Groovy SQL Child Collection Persistence

I've got a class set up kind of like this:
class ParentClass {
    // Some other fields
    Set<ChildClass> children
}
I want to use groovy.sql.Sql to keep the related ChildClass objects persisted appropriately in relation to the ParentClass. I've used ORM tools like Hibernate before, but I'd rather stick to just groovy.sql.Sql if at all possible.
I'm wondering if groovy.sql.Sql has any convenience helpers for keeping child collections synchronized. I don't mind writing closures and whatnot to compare the "currently persisted" set with the "newly persisted" set and decide what to add and remove, but I was rather hoping Groovy already took care of that for me.

As far as I know, Groovy has no such mechanism. I suppose you could write your own DSL for that, but I see it as rather complicated (and prone to break on DB schema changes), and I don't know if the game is worth the candle.
If you don't like using ORM tools (I also always hesitate to use them), maybe try something that isn't an ORM tool but helps you avoid plain SQL in Groovy code: jOOQ (as far as I know there's no relationship handling in jOOQ either). I haven't used it yet but still want to try.
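For what it's worth, the bookkeeping you would have to write yourself is just set arithmetic over the child keys; the logic is language-agnostic, so here is a minimal sketch (in Java, with hypothetical integer ids standing in for ChildClass keys):

```java
import java.util.*;

// Hypothetical sync plan: given the ids currently persisted for a parent
// and the ids in the in-memory collection, decide what to INSERT and DELETE.
// Ids present in both sets would be candidates for UPDATE.
class ChildSync {
    static Set<Integer> toInsert(Set<Integer> persisted, Set<Integer> current) {
        Set<Integer> adds = new HashSet<>(current);
        adds.removeAll(persisted);   // in memory but not in the DB yet
        return adds;
    }

    static Set<Integer> toDelete(Set<Integer> persisted, Set<Integer> current) {
        Set<Integer> removes = new HashSet<>(persisted);
        removes.removeAll(current);  // in the DB but no longer referenced
        return removes;
    }
}
```

Each id in the first result becomes an INSERT and each id in the second a DELETE, all executed through plain groovy.sql.Sql statements.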

Related

How do I know whether my class diagram is correct, when I'm already clear on how I'm going to code?

I did a web site project in PHP using MySQL at school. It was not object oriented, but it displayed my content the way I wanted. I then converted the same project into object-oriented classes, using the same CRUD queries inside the class methods and having them interact with a DBWrapper class; essentially, I cut the PHP code, pasted it into the methods, and called that functionality through objects. All of that was done without documentation. Now I am building a web application in .NET, this time with documentation. I already understand getting data from the database through queries, and although C# is a different language, CRUD is similar in any language. I have decided what will display first, what will display next, and so on. So when it comes to coding, how do I know that my class diagram matches what I am actually building? Also: if an object of one class is going to be used inside a second class, do we write it as an attribute of that second class?
Most class diagrams I've seen and made include only the business model entities. Most of the time UML diagrams are used to communicate and document the workings of the system. I like to think of them as pseudo-code.
Please refer to this other question as well: https://softwareengineering.stackexchange.com/questions/190429/what-classes-to-put-exactly-in-a-class-diagram
However, if you feel your implementation ended up with a lot of helper classes then it's probably good to review your system's structure to make sure you are coding "object oriented". Actually making the class diagram is supposed to help you realize what you can improve.
I suggest you also take a look at design patterns. Since you mention experience with C#, this link might be useful: http://www.dotnettricks.com/learn/designpatterns

ServiceStack Swapping ORMLite to Entity Framework

I want to replace ORMLite with EF5, and please don't ask me why :P ... I've searched around the net and had no luck finding much information on how to actually do this.
Do I need to rewrite ORMLiteConnectionFactory into an EFConnectionFactory that registers in global.asax.cs? That seems like a lot to implement and very complex, because it is linked to IOrmLiteDialectProvider, OrmLiteConfig and all that; it also doesn't seem right, because SS normally has a simple answer to every question. For example, it is rather easy to swap Funq for another DI provider.
Is ORMLite the fixed weapon of choice, or is it a flexible option that I can tune? Please help.
For all intents and purposes you're better off pretending OrmLite doesn't exist. OrmLite simply provides extension methods off ADO.NET's raw IDbConnection interface, which work similarly to (and is why OrmLite can be used alongside) Dapper and other micro ORMs.
Entity Framework, in contrast, manages its own heavy abstraction that is by design not substitutable with other micro ORMs, so you shouldn't attempt this route.
Simply ignore that OrmLite exists and use Entity Framework as you normally would. Last I heard, EF doesn't play too nicely with IoC containers, so you'll probably have to resort to the usual approach of instantiating a new EF DataContext whenever you want to use it.

generalizing classes or not when using mapper for database

Let's say I have the following classes:
customer, applicant, agency, partner
I need to work with databases and use mappers to map the objects to the database. My question is: can I generalize these classes with a Person class and implement just one mapper for that class? Basically it would look like this:
instead of this:
The mapper classes use an ORM to save and edit fields in the database; I use them because the application I am building has to be developed further in the future.
If each of the classes (Partner, Applicant, etc.) has different attributes, you can't have only one mapper for all of them (well, you can, but then you would have to use meta-programming to retrieve information from the classes lower down the hierarchy). The solution also depends on how, and by whom, your database is managed: do you have control over its design, or is it something you cannot change? In the second case I would definitely use one mapper per class to allow full decoupling between DB and app. In the first case, you could use some kind of mapping hierarchy. It also depends on which language and frameworks you are using.
Good luck!
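To make that trade-off concrete, here is a minimal sketch (in Java, with hypothetical Person/Customer classes) of the meta-programming route: a single generic mapper that discovers a subclass's columns via reflection, walking up the hierarchy so inherited fields are included too.

```java
import java.lang.reflect.Field;
import java.util.*;

// Hypothetical entities: Customer inherits the shared Person fields.
class Person { String name; }
class Customer extends Person { String accountNo; }

// One mapper for the whole hierarchy: it reflects over the concrete class
// and all its superclasses to build a column-name -> value map.
class GenericRowMapper {
    static Map<String, Object> toRow(Object entity) {
        Map<String, Object> row = new LinkedHashMap<>();
        try {
            for (Class<?> c = entity.getClass(); c != Object.class; c = c.getSuperclass()) {
                for (Field f : c.getDeclaredFields()) {
                    f.setAccessible(true);        // read private fields too
                    row.put(f.getName(), f.get(entity));
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return row;
    }
}
```

This buys you a single mapper at the price of reflection and a tight coupling between field names and column names; one hand-written mapper per class is more code but far easier to evolve independently of the schema.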

Partial-mocking considered bad practice? (Mockito)

I'm unit-testing a business object using Mockito. The business object uses a DAO which normally gets data from a DB. To test the business object, I realized that it was easier to use a separate in-memory DAO (which keeps data in a HashMap) than to write all the
when(...).thenReturn(...)
statements. To create such a DAO, I started by partial-mocking my DAO interface like so:
when(daoMock.getById(anyInt())).then(new Answer() {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        int id = (Integer) invocation.getArguments()[0];
        return map.get(id);
    }
});
but it occurred to me that it would be easier to just write a whole new DAO implementation myself (backed by an in-memory HashMap) without using Mockito at all (no need to dig arguments out of that InvocationOnMock object) and have the tested business object use this new DAO.
Additionally, I've read that partial mocking is considered bad practice. My question is: is what I'm doing bad practice in my case? What are the downsides? To me this seems OK, and I'm wondering what the potential problems could be.
I'm wondering why you need your fake DAO to be backed by a HashMap. I'm wondering whether your tests are too complex. I'm a big fan of having very simple test methods that each test one aspect of your SUT's behaviour. In principle, this is "one assertion per test", although sometimes I end up with a small handful of actual assert or verify lines, for example, if I'm asserting the correctness of a complex object. Please read http://blog.astrumfutura.com/2009/02/unit-testing-one-test-one-assertion-why-it-works/ or http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html to learn more about this principle.
So for each test method, you shouldn't be using your fake DAO over and over. Probably just once, or twice at the very most. Therefore, having a big HashMap full of data would seem to me to be EITHER redundant, OR an indication that your test is doing WAY more than it should. For each test method, you should really only need one or two items of data. If you set these up using a Mockito mock of your DAO interface, and put your when ... thenReturn in the test method itself, each test will be simple and readable, because the data that the particular test uses will be immediately visible.
You may also want to read up on the "arrange, act, assert" pattern, (http://www.arrangeactassert.com/why-and-what-is-arrange-act-assert/ and http://www.telerik.com/help/justmock/basic-usage-arrange-act-assert.html) and be careful about implementing this pattern INSIDE each test method, rather than having different parts of it scattered across your test class.
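A skeleton of that layout, with a deliberately trivial (and hypothetical) SUT so the three parts stand out:

```java
// Hypothetical SUT; the point here is the test layout, not the logic.
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorTest {
    // One behaviour, one assertion, and all three AAA parts kept
    // together inside the single test method.
    static boolean addReturnsSum() {
        // arrange
        Calculator sut = new Calculator();
        // act
        int result = sut.add(2, 3);
        // assert
        return result == 5;
    }
}
```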
Without seeing more of your actual test code, it's difficult to know what other advice to give you. Mockito is supposed to make mocking easier, not harder; so if you've got a test where that's not happening for you, then it's certainly worth asking whether you're doing something non-standard. What you're doing is not "partial mocking", but it certainly seems like a bit of a testing smell to me. Not least because it couples lots of your test methods together - ask yourself what would happen if you had to change some of the data in the HashMap.
You may find https://softwareengineering.stackexchange.com/questions/158397/do-large-test-methods-indicate-a-code-smell useful too.
When testing my classes, I often use a combination of Mockito-made mocks and also fakes, which are very much what you are describing. In your situation I agree that a fake implementation sounds better.
There's nothing particularly wrong with partial mocks, but it makes it a little harder to determine when you're calling the real object and when you're calling your mocked method--especially because Mockito silently fails to mock final methods. Innocent-looking changes to the original class may change the implementation of the partial mock, causing your test to stop working.
If you have the flexibility, I recommend extracting an interface that exposes the method you need to call, which will make it easier whether you choose a mock or a fake.
To write a fake, implement that small interface without Mockito using a simple class (nested in your test, if you'd like). This will make it very easy to see what is happening; the downside is that if you write a very complicated Fake you may find you need to test the Fake too. If you have a lot of tests that could make use of a good Fake implementation, this may be worth the extra code.
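As a concrete sketch of such a fake (the DAO interface and names are hypothetical, kept deliberately small):

```java
import java.util.*;

// A small interface extracted for the one method family the SUT needs.
interface WidgetDao {
    Widget getById(int id);
    void save(Widget w);
}

class Widget {
    final int id;
    final String name;
    Widget(int id, String name) { this.id = id; this.name = name; }
}

// The fake: a plain class backed by a HashMap, no Mockito involved.
// Tests seed it directly via save(...) instead of stubbing returns.
class InMemoryWidgetDao implements WidgetDao {
    private final Map<Integer, Widget> store = new HashMap<>();
    public Widget getById(int id) { return store.get(id); }
    public void save(Widget w) { store.put(w.id, w); }
}
```

If the fake ever grows real logic of its own, that is the signal it has become complicated enough to need tests itself.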
I highly recommend "Mocks aren't Stubs", an article by Martin Fowler (famous for his book Refactoring). He goes over the names of different types of test doubles, and the differences between them.

How do you deal with DDD and EF4

I'm facing several problems trying to apply DDD with EF4 (in an ASP.NET MVC 2 context). Your advice would be greatly appreciated.
First of all, I started to use POCOs because the dependency on ObjectContext was not very comfortable in many situations.
Going POCO solved some problems, but the experience is not what I was used to with NHibernate.
I would like to know if it's possible to use the designer to generate not only entities but also Value Objects (ComplexType?). By Value Object I mean a class with one constructor and no property setters (T4 modification needed?).
The only way I found to add behavior to anemic entities is to create partial classes that extend those generated from the edmx. I'm not satisfied with this approach.
I don't know how to create several repositories from one edmx. For now I'm using partial classes to group methods for each aggregate; each group is in fact a repository.
The last question is about IQueryable. Should it be exposed outside the repository? If I refer to the blue book, the repository should be a unit of execution and shouldn't expose something like IQueryable. What do you think?
Thanks for your help.
Thomas
It's fine to use POCOs, but note that EntityObject doesn't require an ObjectContext.
Yes, Complex Types are value objects, and yes, you can generate them in the designer: select several properties of an entity, right-click, and choose to refactor them into a complex type.
I strongly recommend putting business methods in their own types, not on entities. "Anemic" types can be a problem if you must maintain them, but when they're codegened they're hardly a maintenance problem. Making business logic separate from entity types allows your business rules and your data model to evolve independently. Yes, you must use partial classes if you must mix these concerns, but I don't believe that separating your model and your rules is a bad thing.
I think that repositories should expose IQueryable, but you can make a good case that domain services should not. People often try to build their repositories into domain services, but remember that the repository exists only to abstract away persistence. Concerns like security should be in domain services, and you can make the case that having IQueryable there gives too much power to the consumer.
I think it's OK to expose IQueryable outside of the repository, only because not doing so could be unnecessarily restrictive. If you only expose data via methods like GetPeopleByBirthday and GetPeopleByLastName, what happens when somebody goes to search for a person by last name and birthday? Do you pull in all the people with the last name "Smith" and do a linear search for the birthday you want, or do you create a new method GetPeopleByBirthdayAndLastName? What about the poor hapless fellow who has to implement a QBE form?
Back when the only way to make ad hoc queries against the domain was to generate SQL, the only way to keep yourself safe was to offer just specific methods to retrieve and change data. Now that we have LINQ, though, there's no reason to keep the handcuffs on. Anybody can submit a query and you can execute it safely without concern.
Of course, you could be concerned that a user might be able to view another's data, but that's easy to mitigate because you can restrict what data you give out. For example:
public IQueryable<Content> Content
{
    // "context" is assumed to be the underlying ObjectContext exposing the Content set
    get { return context.Content.Where(c => c.UserId == this.UserId); }
}
This will make sure that the only Content rows that the user can get are those that have his UserId.
If your concern is the load on the database, you could do things like examine query expressions for table scans (accessing tables without Where clauses or with no indexed columns in the Where clause). Granted, that's non-trivial, and I wouldn't recommend it.
It's been some time since I asked this question, and I've since had a chance to try it on my own.
I don't think it's a good practice to expose IQueryable at all outside the DAL layer. It brings more problems than it solves. I'm talking about large MVC applications. First of all, refactoring is harder: many developers use IQueryable instances from the views and then struggle with the fact that by the time the IQueryable is resolved, the connection has already been disposed. There are also performance problems, because the whole database is often queried for a given result set, and so on.
I'd rather expose IEnumerable from my repositories, and believe me, it has saved me a lot of trouble.
