I have a Java/Spring/Hibernate application, complete with domain classes that are basically Hibernate POJOs.
There is a piece of functionality that I think could be written well in Grails.
I wish to reuse the domain classes that I have created in the main Java app.
What is the best way to do so?
Should I write new domain classes extending the Java classes? That sounds tacky.
Or can I 'generate' controllers off the Java domain classes?
What are the best practices around reusing Java domain objects in Grails/Groovy?
I am sure there must be others writing some pieces in Grails/Groovy.
If you know of a tutorial that covers such an integration, that would be awesome!
PS: I am quite a newbie in Grails/Groovy, so I may be missing the obvious. Thanks!
Knowing just how well Groovy and Grails excel at integrating with existing Java code, I think I might be a bit more optimistic than Michael about your options.
The first thing to note is that you're already using Spring and Hibernate, and since your domain classes are already POJOs they should be easy to integrate with. Any Spring beans you might have can be specified in an XML file as usual (in grails-app/conf/spring/resources.xml) or much more simply using the Spring bean builder feature of Grails. They can then be accessed by name in any controller, view, service, etc. and worked with as usual.
Here are the options, as I see them, for integrating your domain classes and database schema:
Option 1: Bypass GORM and load/save your domain objects exactly as you're already doing.
Grails doesn't force you to use GORM, so this should be quite straightforward: create a .jar of your Java code (if you haven't already) and drop it into the Grails app's lib directory. If your Java project is Mavenized, it's even easier: Grails 1.1 works with Maven, so you can create a pom.xml for your Grails app and add your Java project as a dependency as you would in any other (Java) project.
Either way you'll be able to import your classes (and any supporting classes) and proceed as usual. Because of Groovy's tight integration with Java, you'll be able to create objects, load them from the database, modify them, save them, validate them etc. exactly as you would in your Java project. You won't get all the conveniences of GORM this way, but you would have the advantage of working with your objects in a way that already makes sense to you (except maybe with a bit less code thanks to Groovy). You could always try this option first to get something working, then consider one of the other options later if it seems to make sense at that time.
One tip if you do try this option: abstract the actual persistence code into a Grails service (StorageService perhaps) and have your controllers call methods on it rather than handling persistence directly. This way you could replace that service with something else down the road if needed, and as long as you maintain the same interface your controllers won't be affected.
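To make that tip concrete, here is a minimal sketch, with hypothetical names throughout: StorageService comes from the suggestion above, while Customer and CustomerDao stand in for one of your domain classes and its existing Spring/Hibernate DAO. Grails services are normally written in Groovy, but plain Java interfaces and classes under src/java work fine and keep the contract explicit:

    // src/java/com/example/StorageService.java -- the interface your
    // controllers code against. All names here are hypothetical.
    public interface StorageService {
        Customer findCustomer(Long id);
        void saveCustomer(Customer customer);
    }

    // src/java/com/example/HibernateStorageService.java -- delegates to
    // your existing Spring/Hibernate DAO; you can replace this class
    // later without touching any controller.
    public class HibernateStorageService implements StorageService {

        private final CustomerDao customerDao; // your existing DAO bean

        public HibernateStorageService(CustomerDao customerDao) {
            this.customerDao = customerDao;
        }

        public Customer findCustomer(Long id) {
            return customerDao.findById(id);
        }

        public void saveCustomer(Customer customer) {
            customerDao.save(customer);
        }
    }

The controllers only ever see the StorageService interface, so a later move to GORM (or anything else) means replacing just the implementation class.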
Option 2: Create new Grails domain classes as subclasses of your existing Java classes.
This could be pretty straightforward if your classes are already written as proper beans, i.e. with getter/setter methods for all their properties. Grails will see these inherited properties as it would if they were written in the simpler Groovy style. You'll be able to specify how to validate each property, using either simple validation checks (not null, not blank, etc.) or with closures that do more complicated things, perhaps calling existing methods in their POJO superclasses.
You'll almost certainly need to tweak the mappings via the GORM mapping DSL to fit the realities of your existing database schema. Relationships are where it might get tricky: for example, your existing schema might handle a relationship in some other way where GORM expects a join table, though there may even be a way to work around differences such as these. I'd suggest learning as much as you can about GORM and its mapping DSL and then experimenting with a few of your classes to see if this is a viable option.
Option 3: Have Grails use your existing POJOs and Hibernate mappings directly.
I haven't tried this myself, but according to Grails's Hibernate Integration page this is supposed to be possible: "Grails also allows you to write your domain model in Java or re-use an existing domain model that has been mapped using Hibernate. All you have to do is place the necessary 'hibernate.cfg.xml' file and corresponding mappings files in the '%PROJECT_HOME%/grails-app/conf/hibernate' directory. You will still be able to call all of the dynamic persistent and query methods allowed in GORM!"
Googling "gorm legacy" turns up a number of helpful discussions and examples, for example this blog post by Glen Smith (co-author of the soon-to-be-released Grails in Action) where he shows a Hibernate mapping file used to integrate with "the legacy DB from Hell". Grails in Action has a chapter titled "Advanced GORM Kungfu" which promises a detailed discussion of this topic. I have a pre-release PDF of the book, and while I haven't gotten to that chapter yet, what I've read so far is very good, and the book covers many topics that aren't adequately discussed in other Grails books.
Sorry I can't offer any personal experience on this last option, but it does sound doable (and quite promising). Whichever option you choose, let us know how it turns out!
Do you really want/need to use Grails rather than just Groovy?
Grails really isn't something you can use to add a part to an existing web app. The whole "convention over configuration" approach means that you pretty much have to play by Grails' rules, otherwise there is no point in using it. And one of those rules is that domain objects are Groovy classes that are heavily "enhanced" by the Grails runtime.
It might be possible to have them extend existing Java classes, but I wouldn't bet on it - and all the Spring and Hibernate parts of your existing app would have to be discarded, or at least you'd have to spend a lot of effort to make them work in Grails. You'll be fighting the framework rather than profiting from it.
IMO you have two options:
Rewrite your app from scratch in Grails while reusing as much of the existing code as possible.
Keep your app as it is and add new stuff in Groovy, without using Grails.
The latter is probably better in your situation. Grails is meant to create new web apps very quickly, that's where it shines. Adding stuff to an existing app just isn't what it was made for.
EDIT:
Concerning the clarification in the comments: if you're planning to write basically a data entry/maintenance frontend for data used by another app and have the DB as the only communication channel between them, that might actually work quite well with Grails; it can certainly be configured to use an existing DB schema rather than creating its own from the domain classes (though the latter is less work).
This post provides some suggestions for using Grails to wrap existing Java classes in a web framework.
After reading "OSGi application design - am I abusing the service framework?" and "What is the best way of grouping OSGi bundles to make a coherent 'application'?", I'm now stuck with the burning question of how to apply the "don't program OSGi" mantra.
Assuming the bounded context is the entire application and not the individual bundles, I should be free to declare some aggregate root entity in an API bundle, and in the DDD style, a repository for working with said entity. Now, the crux of the matter: an explicit repository appears to be antithetical to the OSGi style because a repository is itself a registry (of domain objects), and this design sidesteps the OSGi service registry. But to do away with the repository would require consumers to program OSGi in order to perform lookups of an entity.
It is said that repositories are just a façade. Does this imply that I should create a repository implementation which delegates to the service registry? That does not appear to be a coherent approach, since consumers would then have two entry points into the domain model: the repository, and the service registry itself. But to forgo the repository would no longer be DDD, because we're back to mixing domain logic with framework code in a spaghetti-like fashion.
So what's the take-away here? Is domain-driven design incompatible with the "OSGi way," or am I missing some critical concept?
The rationale for not depending on OSGi is that OSGi is middleware: letting it become visible in your domain code makes that code less cohesive. Domain code and OSGi code should not mix (just as domain code should not mix with JMS, Java EE, or any other infrastructure API). Less cohesive code is more complicated and therefore more error prone.
However, you always need some bridging code that links the infrastructure to your domain code. Since this bridging code is going to be coupled to some API anyway, leverage it to the hilt; there is absolutely no point in abstracting yourself from the environment in bridge code.
DS (and Blueprint, iPOJO, etc.) hide the registry from the business logic; they are just very convenient ways to provide your domain registry. So with OSGi you hardly have to write any code to get a very powerful repository of domain objects, without your domain objects themselves being aware of OSGi.
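As an illustration, here is a sketch using the (newer) Declarative Services annotations; earlier DS releases express the same thing with XML component descriptors, and all of the Order/repository names are hypothetical:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Plain domain types: no OSGi imports anywhere in them.
    class Order { /* fields omitted */ }

    interface OrderRepository {
        Order findById(long id);
    }

    // Bridge code: DS publishes this implementation in the service
    // registry; no registry API appears in the class itself.
    @Component(service = OrderRepository.class)
    public class JpaOrderRepository implements OrderRepository {
        public Order findById(long id) {
            // ... look the order up via JPA ...
            return new Order();
        }
    }

    // A consumer: DS injects the repository, so there are no registry
    // lookups here either.
    @Component
    public class OrderProcessor {
        @Reference
        private OrderRepository repository;

        public void process(long orderId) {
            Order order = repository.findById(orderId);
            // ... domain logic on the order ...
        }
    }

Neither the repository interface nor its consumer ever touches the service registry API; only the annotations, which DS processes for you, know about OSGi.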
Yes, if you move your code from OSGi to somewhere else, you will have to rewrite the bridge code, and the amount of generic functionality OSGi provides (which you would then have to write yourself) is surprisingly large. However, avoiding the OSGi constructs now, just because you would have to replace that functionality in the event the app is ever ported, is throwing away money.
Conclusion: domain code should not mix with OSGi; bridge code should leverage OSGi to the hilt.
I think you should simply register the repository as an OSGi service. The bundles that need the repository should reference the service and should only know of the repository interface. Of course, this means you have to use either a ServiceTracker in an Activator or, e.g., a Blueprint context. I prefer the Blueprint context, as that way you do not have a real Java-level dependency on OSGi.
Of course this creates a dependency on OSGi in some way, but that is not avoidable. The basic idea to follow is that you should keep the OSGi-specific code out of your business code and have it in separate classes or in Blueprint.
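If you do take the ServiceTracker/Activator route instead, the registration side is small. A sketch, with hypothetical OrderRepository/JpaOrderRepository names:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Bridge code only: this is the one class in the bundle that knows
    // about OSGi, so the domain classes don't have to.
    public class RepositoryActivator implements BundleActivator {

        private ServiceRegistration<OrderRepository> registration;

        public void start(BundleContext context) {
            registration = context.registerService(
                    OrderRepository.class, new JpaOrderRepository(), null);
        }

        public void stop(BundleContext context) {
            registration.unregister();
        }
    }

With Blueprint you would describe the same registration in XML instead, and no class in your bundle would import org.osgi.framework at all.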
See one of my tutorials, which shows a simple application with a UI, a model, and a persistence implementation. While the application is too small to really do DDD, the approach should be fairly compatible. One thing you will notice is that the whole application does not import a single OSGi interface. One nice side effect is that you can easily reuse the code outside OSGi, but then of course you have to solve the DI differently.
http://www.liquid-reality.de/x/DIBZ
I am afraid I am overdoing things here.
We recently started a .NET project containing different class libraries for the DAL, services, and DTOs.
The question is about our DAL: we wanted a clean and easily maintained data access layer, and we wanted to go with Entity Framework 4.1.
So we are still not clear whether to opt for plain ADO.NET using the DAO and DAOImpl methodology, or for Entity Framework.
Could anyone please suggest the best approach?
It depends on how much work you want to put into creating your own customized DAL. In principle, plain ADO.NET with your own implementation gives you the most control, but it also means maintaining and optimizing the DAL yourself and handling complex concerns such as concurrency, caching, and the mapping between your business objects, the DAL, and the database.
If you want to concentrate more on business value and functionality, you might decide to go with Entity Framework (4.3 is now released, with 5.0 to come). The advantage is that you would be using a DAL that has been carefully tested and already contains solutions for concurrency, caching, and mapping.
But I would strongly suggest using the Repository and Unit of Work patterns on top of it to abstract the usage of Entity Framework away from your other layers. That way you keep the option of completely changing the underlying technology later without any impact on the other layers (you could replace EF with your own ADO.NET implementation if, for example, the performance turns out not to be as good as it should be).
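The shape of that abstraction is small. Here is a sketch in Java syntax (this thread is .NET, but the interfaces translate one-for-one to C#; all names are hypothetical):

    // Hypothetical domain type.
    class Customer { /* fields omitted */ }

    // The rest of the application depends only on these interfaces.
    interface CustomerRepository {
        Customer findById(long id);
        void add(Customer customer);
    }

    interface UnitOfWork extends AutoCloseable {
        CustomerRepository customers();
        void commit(); // an EF-backed implementation would call SaveChanges() here
        void close();  // dispose the underlying context/connection
    }

Only the implementation of these interfaces knows whether EF, plain ADO.NET, or anything else sits underneath.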
It depends on the type of application that you need to build and on its performance requirements. Using EF can really reduce your work and give you much quicker results. It also depends on the development team's capabilities: if you only have senior developers and architects working on the project, then you will create your own DAL easily, but for beginners it is really hard to implement a good, optimized, and robust DAL.
I hope that helps!
I've been using the ADO.NET and DTO combination in my DAL for as long as I can remember, and I love the fact that I control the entire process of creating entities and methods. However, that comes at the price of having to write classes for every entity and methods for every stored procedure. I don't mind that, but recently I discovered PLINQO for LINQ to SQL, and I'm loving it. It makes creating and updating classes from your database schema easy while allowing a high level of customization. It's basically LINQ to SQL on steroids.
I also liked NHibernate, but I think it has a steeper learning curve than PLINQO.
I'd give PLINQO a try if I were you.
I have a question regarding Java EE security best practices.
What are the advantages and disadvantages of using either annotations or a deployment descriptor to define Security for a web application?
Are there cases where you favor one over the other?
Thank you in advance :)
Well, it is partly a matter of fashion. Some years ago there was a massive movement to separate application configuration from programming (see, for example, the EJB spec, which defines a dedicated role for exactly this; that person does not even have to be a programmer). To support this, the use of XML was endorsed (instead of plain text or property files). Then annotations brought that XML configuration back into the code. I think this was largely a reaction to the mess in the Spring framework: it was really hard to configure an application, and there was no good way to 'debug' your configuration. Annotations are a lightweight way to do configuration; in simple scenarios you can skip defining the relationships between your components, because they can be inferred from the code itself.
Using annotations is elegant (you do not need additional XML files), but it requires recompiling your code every time you make a change, whereas a deployment descriptor can be changed without touching the compiled code.
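For example, with Servlet 3.0 the same constraint can be written either in web.xml or as an annotation. Here is the annotation form (the servlet path and role name are made up for illustration):

    import javax.servlet.annotation.HttpConstraint;
    import javax.servlet.annotation.ServletSecurity;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;

    // Equivalent to a <security-constraint> entry in web.xml that
    // restricts /admin/reports to the "admin" role. Changing the role
    // here means recompiling; changing it in web.xml does not.
    @WebServlet("/admin/reports")
    @ServletSecurity(@HttpConstraint(rolesAllowed = "admin"))
    public class ReportServlet extends HttpServlet {
        // doGet/doPost omitted
    }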
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment, because the 'powers that be' cannot ever stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So, I have to make the code as persistence ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and will delegate all persistence to the UoW, but I find myself playing a shell game with the various logic.
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LINQ to SQL, or NHibernate isn't that time-consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
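For example, a minimal memory-based repository of the kind you would hand to unit tests might look like this (a sketch in Java syntax, since the pattern is language-neutral; all names are hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical POCO-style entity.
    class UserSession {
        private long id;
        long getId() { return id; }
        void setId(long id) { this.id = id; }
    }

    // The same interface the production repository implements.
    interface SessionRepository {
        UserSession findById(long id);
        void add(UserSession session);
    }

    // Test double: backed by a map instead of a database.
    class InMemorySessionRepository implements SessionRepository {

        private final Map<Long, UserSession> store = new ConcurrentHashMap<Long, UserSession>();
        private final AtomicLong nextId = new AtomicLong();

        public UserSession findById(long id) {
            return store.get(id);
        }

        public void add(UserSession session) {
            session.setId(nextId.incrementAndGet());
            store.put(session.getId(), session);
        }
    }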
So, I have a web-based application that is using the Wicket 1.4 framework, and it uses Spring beans, the Java Persistence API (JPA), and the OpenSessionInView pattern. I'm hoping to find a security model that is declarative but doesn't require gobs of XML configuration; I'd prefer annotations.
Here are the options so far:
Spring Security (guide) - looks complete, but every guide I find that combines it with Wicket still calls it Acegi Security, which makes me think it must be old.
Wicket-Auth-Roles (guide 1 and guide 2) - Most guides recommend mixing this with Spring Security, and I love the declarative style of @Authorize("ROLE1","ROLE2", etc.). I'm concerned about having to extend AuthenticatedWebApplication, since I'm already extending org.apache.wicket.protocol.http.WebApplication, and Spring is already proxying that behind org.apache.wicket.spring.SpringWebApplicationFactory.
SWARM / WASP (guide) - This looks the newest (though the main contributor passed away years ago), but I hate all of the JAAS-styled text files that declare permissions for principals. I also don't like the idea of making an Action class for every single thing a user might want to do. Secure models also aren't immediately obvious to me. Plus, there isn't an Authn example.
Additionally, it looks like lots of folks recommend mixing the first and second options. I can't tell what the best practice is at all, though.
I don't know if you saw this blog post so I'm adding it here as reference and I'll just quote the end:
"Update 2009/03/12: those interested in securing Wicket applications should also be aware that there is an alternative to Wicket-Security, called wicket-auth-roles. This thread will give you a good overview of the status of the two frameworks. Integrating wicket-auth-roles with Spring Security is covered here. One compelling feature of wicket-auth-roles is the ability to configure authorizations with Java annotations. I find it somehow more elegant than a centralized configuration file. There is an example here."
Based on the information above and the information you provided, and because I prefer annotations too, I'd go for Wicket-Auth-Roles with Spring Security (i.e. guide 2). Extending AuthenticatedWebApplication shouldn't be a problem, as this class extends WebApplication. And pulling your application object out of the Spring context using SpringWebApplicationFactory should also just work.
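For reference, the annotation style being discussed looks like this with wicket-auth-roles under Wicket 1.4 (the page and role names are made up; the annotation's package moved in later Wicket versions):

    import org.apache.wicket.authorization.strategies.role.annotations.AuthorizeInstantiation;
    import org.apache.wicket.markup.html.WebPage;

    // Only users who hold the ADMIN role may even instantiate this page;
    // everyone else is sent to the sign-in page instead.
    @AuthorizeInstantiation("ADMIN")
    public class AdminPage extends WebPage {
        public AdminPage() {
            // components omitted
        }
    }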
And if your concerns are really big, this would be pretty easy and fast to confirm with a test IMO :)
We've been using Wicket-Security for years now, and we have used it together with JAAS files and with annotations. Defining JAAS files is quite a hassle, and maintaining them is near impossible...
With annotations, one has to define actions and principals for every page. This is time-consuming; however, it does allow you to let the user define roles and authorizations dynamically. It is also possible to test all of the principals using WicketTester.
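For example, such a test might look roughly like this (MyAuthenticatedApplication, AdminPage, and LoginPage stand in for your own application, a secured page, and your sign-in page):

    import org.apache.wicket.util.tester.WicketTester;
    import org.junit.Test;

    public class AdminPageAuthorizationTest {

        @Test
        public void anonymousUserIsSentToTheLoginPage() {
            WicketTester tester = new WicketTester(new MyAuthenticatedApplication());
            tester.startPage(AdminPage.class);
            tester.assertRenderedPage(LoginPage.class); // instantiation was vetoed
        }
    }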
Each of the three packages has its advantages and disadvantages; it's a matter of taste, and it also depends on the size of the application.