Entity Framework POCO Change Tracking Strategies - entity-framework-5

I have an N-Tier application where POCOs are populated by Entity Framework on the server side and transferred to my client applications. The clients make changes to the POCOs or add new POCOs and then send them back to the server to be stored in the database.
If I am using pure POCOs, i.e., no proxies and no self-tracking entities, what are some of the common approaches people are taking to solve the change tracking issue? If your service receives a collection of POCOs, how does it know whether to do an Add, Update or Delete using Entity Framework?

Entity Framework doesn't have good built-in support for such disconnected scenarios. I am aware of three general options:
Use GraphDiff, an open source add-on library (a usage sketch follows this list)
Advantages
No need to write change tracking code on client side
Common pattern to update disconnected object graphs in the database
Not much code to write on server side
Disadvantages
Database must be queried and entities have to be loaded to detect if an object has to be added, updated or deleted
Dependency on a third party library in addition to the EF core libraries
Update object graphs manually on server side (Example; a sketch of this approach also follows the list)
Advantages
No need to write change tracking code on client side
No dependency on a third party library in addition to the EF core libraries
Disadvantages
Database must be queried and entities have to be loaded to detect if an object has to be added, updated or deleted
No common pattern, i.e. most update scenarios require individual code
A lot of code to write on server side
Add properties for entity states to your objects and track changes manually on client side by setting the states accordingly (a hedged sketch follows this list; I believe Julie Lerman uses and recommends this approach)
Advantages
Database doesn't have to be queried to detect if an object has to be added, updated or deleted
No dependency on a third party library in addition to the EF core libraries
(Probably?) Common pattern on server side to translate tracked states into entity states of attached entities
Disadvantages
Change tracking code to write on client side
No common pattern on client side, i.e. most change tracking scenarios (and client types/UI technologies) require individual code
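For illustration, here is a minimal sketch of the GraphDiff option. The Order/OrderLine model and MyDbContext are invented for this example; only the UpdateGraph extension method and the OwnedCollection mapping come from the GraphDiff library (RefactorThis.GraphDiff).
    using System.Collections.Generic;
    using RefactorThis.GraphDiff; // provides the UpdateGraph extension method

    // Hypothetical model shared by the sketches below.
    public class Order
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
        public ICollection<OrderLine> Lines { get; set; }
    }

    public class OrderLine
    {
        public int Id { get; set; }
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

    public class GraphDiffOrderService
    {
        public void SaveOrder(Order detachedOrder)
        {
            using (var context = new MyDbContext()) // placeholder DbContext
            {
                // GraphDiff loads the current graph from the database, diffs it
                // against the detached graph received from the client and applies
                // the required inserts, updates and deletes in one call.
                context.UpdateGraph(detachedOrder, map => map
                    .OwnedCollection(o => o.Lines));

                context.SaveChanges();
            }
        }
    }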
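By contrast, a sketch of the manual option against the same hypothetical model. This is only one way to write such an update (most scenarios need individual code, as noted above); note that EntityState lives in the System.Data namespace in EF 5 and in System.Data.Entity from EF 6 on.
    using System.Data.Entity; // Include(); EntityState sits in System.Data in EF 5
    using System.Linq;

    public class ManualOrderService
    {
        public void SaveOrder(Order detachedOrder)
        {
            using (var context = new MyDbContext()) // placeholder DbContext
            {
                // Load the current state of the graph from the database.
                var dbOrder = context.Orders
                    .Include(o => o.Lines)
                    .Single(o => o.Id == detachedOrder.Id);

                // Copy the scalar properties of the parent.
                context.Entry(dbOrder).CurrentValues.SetValues(detachedOrder);

                // Delete lines the client removed.
                foreach (var dbLine in dbOrder.Lines.ToList())
                {
                    if (!detachedOrder.Lines.Any(l => l.Id == dbLine.Id))
                        context.Entry(dbLine).State = EntityState.Deleted;
                }

                // Insert new lines and update existing ones.
                foreach (var line in detachedOrder.Lines)
                {
                    var dbLine = dbOrder.Lines.SingleOrDefault(l => l.Id == line.Id);
                    if (dbLine == null)
                        dbOrder.Lines.Add(line); // new line (Id == 0 from the client)
                    else
                        context.Entry(dbLine).CurrentValues.SetValues(line);
                }

                context.SaveChanges();
            }
        }
    }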
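And a hedged sketch of the third option, assuming an invented ObjectState enum travels with each POCO. The server merely translates the tracked states into EF entity states, so no query is needed to decide the operation:
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations.Schema;
    using System.Data.Entity;

    // Client-visible state, maintained by client-side change tracking code.
    public enum ObjectState { Unchanged, Added, Modified, Deleted }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        [NotMapped] // set by the client while editing; never persisted
        public ObjectState State { get; set; }
    }

    public class CustomerService
    {
        public void SaveCustomers(IEnumerable<Customer> customers)
        {
            using (var context = new MyDbContext()) // placeholder DbContext
            {
                foreach (var customer in customers)
                {
                    context.Set<Customer>().Attach(customer);
                    context.Entry(customer).State = ToEntityState(customer.State);
                }
                context.SaveChanges();
            }
        }

        private static EntityState ToEntityState(ObjectState state)
        {
            switch (state)
            {
                case ObjectState.Added: return EntityState.Added;
                case ObjectState.Modified: return EntityState.Modified;
                case ObjectState.Deleted: return EntityState.Deleted;
                default: return EntityState.Unchanged;
            }
        }
    }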

Related

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about transitioning away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving on:
There is a strong need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual dates for when the transition will be finished. (Main reason)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
The new concept of response/request descriptors gives me headaches, as they don't support different parameters in path patterns (e.g., an access token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss for me in leaving RestKit is the object mapping process.
Could you recommend standalone libraries you use that have shown themselves to be flexible and stable?
II. And as I said, I need no fully functional storage, but I still need some caching support in some places.
I've heard that NSURLCache has become useful in the latest OS release.
Have you used it, and what's your strategy?
Does it return cached API responses when the network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give some advice about the architecture he or she uses in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you miss.
II. The answer to your caching requirements is highly dependent on the context. However, I have found for my recent functional requirements that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, allows me to keep the application logic relatively simple whilst giving the user some continuity. A simple error notification for connectivity issues can be dealt with in a cross-cutting manner.
III. One thought on architecture relevant to this aspect is to ensure that the APIs the app depends on provide data according to the app experience. This allows your app to focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to API dependencies such as data. This has the further benefit of reducing the chattiness of the app.

How does the Single Store for XPages work?

I have a number of XPages design elements that I use in many different databases. If I read the wiki correctly, the single store is an all-or-nothing situation.
So I want to create a unique design in each database but use the set of reusable XPages elements from a single store location. The wiki says:
Apart from the "dummy or blank XPage with the same name of the default XPage" in each instance application, does it matter if an 'instance' contains XPage design elements?
No. If SCXD is set on an application, all XPages design elements are ignored in the database and the application uses the design elements on the SCXD database.
If this is the case, then I have to create databases where probably 75% of the code is reusable, but I would have to repeat it (and maintain it) in dozens of separate databases. Pity!
XPages and related elements (Custom Controls, SSJS libraries, Java code) can be inherited from a specific template like other design elements. So I would set up a database called, perhaps, "Core Components" (.ntf or .nsf) with a template name of "CoreComponents". Then, on the individual elements in the target DB, you would set inheritance to be specifically from the "CoreComponents" template. The elements that are unique to each database do not inherit from any template. You can then use File > Application > Refresh Design to update the elements with specific inheritance, and the ones which are unique to that database will not get overwritten.
You do need to do a clean build after the refresh, so I recommend that you keep the Core Components database locally or on a different server than the others, so that the daily design task will not update them and corrupt the xsp elements.
IBM's preferred model for reusing XPage artifacts across multiple applications is to create OSGi plugins that leverage the XPages Extensibility API.
NotesIn9 episode 64 demonstrates how to make an existing Custom Control design element a library component, which can then be used in any app that has the library available, instead of having to copy the design element to each app separately. Any subsequent changes to that component are then applied immediately to any apps that use it when a new version of the library is deployed.
If you truly have "dozens" of apps that all share certain features, but the entire design should not be identical across all of them, then the OSGi model is definitely the way to go.
But why not flip the entire model on its head? Traditionally, we've always put the code and the data in the same place (e.g. same NSF) because it was a pain to access -- and, especially, visually represent -- data in one NSF via code in another NSF. That's not true anymore. Why have dozens of apps just because the data lives in dozens of places? Any data source in XPages can be told where the data lives... you can link a central user interface to any number of "remote" data stores (either different NSFs on the same server, or even databases on other servers).
Red Pill, for instance, takes this to its logical extreme: they deploy one NSF, which acts as a portal to all your data, no matter where that data lives. The ACLs of the various NSFs (and Readers fields) still ensure that users don't pry into data they haven't been granted access to, and they have complex analytics algorithms for determining which data the users will actually care about. But if you have 500 NSFs in the domain, you're not maintaining 500 different code templates... it's literally just 1; but that one user interface is how users find, and interact with, all their data.
You certainly don't have to take this premise to that extreme, but perhaps you could identify, say, 5 apps where the UI and / or business logic is similar (or even identical), but the data just lives in multiple places. Create one central app for interacting with all of that data. Create a "homepage" that gives users a way to select which "app" they're trying to access (or, if they should only have access to one to begin with, compute which one that is), and then once they navigate in to the specific "app", just bind the data sources to the relevant NSF instead of assuming each view or document lives in the same NSF that the code does.
It's still a good idea to be aware of the Extensibility API, not only for the sake of code reusability, but also to understand just how much of the behavior of the platform truly is within our control now -- provided, of course, that we're willing to occasionally write some custom Java code. But if you shift away from the one-to-one mapping between code and data that we've habitually maintained in Domino for so long, I can practically guarantee that you'll prefer this approach... both for the ease of implementation and maintenance, and for the comparative simplicity it offers to end users.
You can combine the template technique and the all-code-in-one-database approach:
Divide the application design into two parts: a data part and a code part.
The data part contains all Notes views. If it's a classic Notes application, it would also contain all the design elements for the Notes client, like forms, subforms, frames and so on.
The code part contains all XPages, Custom Controls, CSS, client/server JavaScript libraries, Themes, images, jars and so on.
Put your 75% common code into masterData.ntf and masterCode.ntf.
The application code databases appCodeX.ntf inherit all design elements of masterCode.ntf and contain the additional application specific design elements.
The code from all application templates gets united in allCode.ntf: it inherits everything from masterCode.ntf and the additional pieces of code from the application templates.
Based on that you create an allCode.nsf.
On the data side you use the classic template way.
From here you have two possibilities:
You use Single Copy XPage Design - connect every appData database with allCode.nsf
You connect your XPages in allCode.nsf with appData databases
I prefer the latter. You can define in allCode.nsf where all the application data databases are located, e.g. in property documents.
With the approach shown in the picture, you're still able to separate applications easily, e.g., in case you want to sell them. You already have a separate template for every single application.

How do I keep value objects from the server?

I communicate with the server through JSON, which both in Node.js and in ActionScript ends up as objects (serialized to strings).
I use those objects in my client by reading and modifying them, and I also create secondary objects (from classes) based on what came from the server.
I have two options for designing my client, and I am stuck deciding which of them is more flexible/future-proof.
Keep data as it comes, create many methods to modify the objects, keep secondary objects somewhere separate.
Convert the data into instances of classes where each class has its own group of methods instead of piling the methods in the same place.
Usually I go with option 2 because OOP is delicious, but option 1 seems much simpler in terms of code quantity.
I guess my problem is that I can't figure out if my client is basically a View (from MVC) where the server is the Controller (also from MVC), or if my client and server are two independent, separate projects that communicate, in which case I should consider the client an MVC project in itself.
I would appreciate your 2 cents.
From your question it's not clear how options 1 and 2 differ, but it looks like option 1 is tightly coupled while option 2 has better separation of concerns.
It depends on your application. Do you need to create a client-heavy app with rich UI/UX elements, or maybe a mobile app where bandwidth is limited? If the answer is yes, then go with the second approach: build your MVC-like structure or use existing MV* libraries, like Ember, Angular, Backbone, Knockout, etc.
If you need SEO support and don't have much front-end code, then rendering on the server side is still an option. Even with this approach, an ORM like Mongoose can come in handy.
PS: JavaScript doesn't really have classes; objects inherit from other objects. You can use prototypal inheritance patterns for that.

Help w/ DDD, SOA and PI

Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' cannot ever stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So I have to make the code as persistence ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and will delegate all persistence to the UoW, but I find myself playing a shell game with the various logic.
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. It would be ideal if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs, then you're not really going to be data-store agnostic. Using POCOs will also allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, Linq-to-SQL or NHibernate isn't that time consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
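To make that concrete, here is a hedged sketch of how the StartSession flow could look with persistence-ignorant POCOs behind repository and unit-of-work interfaces. Every type and member name below is invented for illustration; an in-memory implementation of ISessionRepository and IUnitOfWork is then all a unit test needs.
    using System;

    public class Session // persistence-ignorant POCO
    {
        public Guid Id { get; set; }
        public string UserName { get; set; }
        public int TemplateId { get; set; }
        public DateTime LastActivity { get; set; }
    }

    public interface ISessionRepository
    {
        Session FindByUserAndTemplate(string userName, int templateId);
        void Add(Session session);
    }

    public interface IUnitOfWork
    {
        ISessionRepository Sessions { get; }
        void Commit();
    }

    public class SessionManager
    {
        private readonly IUnitOfWork _unitOfWork;

        public SessionManager(IUnitOfWork unitOfWork) // injected, e.g. by Unity
        {
            _unitOfWork = unitOfWork;
        }

        // The find-or-create logic stays in the business layer; the
        // repositories only retrieve and persist.
        public Session StartSession(string userName, int templateId)
        {
            var session = _unitOfWork.Sessions.FindByUserAndTemplate(userName, templateId);
            if (session == null)
            {
                session = new Session
                {
                    Id = Guid.NewGuid(),
                    UserName = userName,
                    TemplateId = templateId
                };
                _unitOfWork.Sessions.Add(session);
            }
            session.LastActivity = DateTime.UtcNow;
            _unitOfWork.Commit();
            return session;
        }
    }
Loading the template to build the "form" would follow the same pattern through the ITemplateRepository; nothing in SessionManager has to change when the Linq-to-SQL-backed unit of work is swapped for EF4 or NHibernate.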
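And since AutoMapper came up, a small sketch of what the translation inside a Linq-to-SQL-backed repository might look like, using AutoMapper's classic static API; the SessionRecord designer class is a stand-in invented for this example.
    using AutoMapper;

    // Stand-in for a Linq-to-SQL designer-generated class.
    public partial class SessionRecord
    {
        public System.Guid Id { get; set; }
        public string UserName { get; set; }
        public int TemplateId { get; set; }
        public System.DateTime LastActivity { get; set; }
    }

    public static class MappingConfig
    {
        public static void Configure()
        {
            // Classic static API of this era; newer AutoMapper versions
            // use MapperConfiguration instead.
            Mapper.CreateMap<SessionRecord, Session>();
            Mapper.CreateMap<Session, SessionRecord>();
        }
    }

    // Inside the repository implementation the translation is then one line:
    // return Mapper.Map<Session>(record);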

Help understanding saving data please. Core Data vs plist

Is every app that allows users to input data built with core data?
I've built a "grocery list" type of table view app where you name the list and then in a detail view add items to the list. Simple.
What I don't get is this: based on an iPhone development book, the example saves the data to a plist using dictionaries.
I've learned that this works on the simulator but not on the device, because the data is saved to the application bundle, not the Documents directory (which was new to me!).
On the device the app works great, except it won't HOLD the data.
Is Core Data or SQLite the only solution?
Is every app that allows users to input data built with core data?
Note that your question as posed is incorrect, as it assumes that CoreData is tied to SQLite and is an alternative to plists.
CoreData is a framework for object lifecycle and graph management. It provides implementation of common tasks like changes tracking and propagation, consistency enforcement, data validation and so on.
The CoreData framework is separate from the object persistence layer and can use different serialization implementations, including SQLite and XML (plists).
For more details, read Core Data Programming - Persistent Store Features.
The decision whether you should use CoreData should be based on whether you need any of the features it provides. If you need to serialize simple object graphs, without consistency requirements, you can use standard NSDictionary to serialize your data in a simple plist file in any of the application-writable folders. Otherwise, use CoreData, and choose the proper persistent store based on the type of data you will be storing.
From what I've seen around the internet, you can use Core Data (which gives you the options of SQLite, atomic, and XML stores), you can use NSKeyedArchiver and NSKeyedUnarchiver (http://www.vimeo.com/1454094), or you can store the data inside the local application folder (possibly using a serialization method). It looks like Core Data is the best solution, but a more complex one to implement. For a simple app like yours, I think serializing data and storing it in the local app directory would be perfect.
I am surprised that your book is showing an example where user data is written to the app bundle. Actually, I'm a little surprised that that is even possible.
You should be able to write your data to an NSDictionary (or NSMutableDictionary) and then write that to your app's Documents directory, using -writeToFile:atomically:
Reading data back in should also be straightforward, using -initWithContentsOfFile:.
For someone just getting started, I would recommend keeping it simple. Working with NSDictionary is very simple, though you have to manage things like the list of lists and how to name lists that are stored in the Documents directory, etc.
Ultimately, using Core Data would probably be a better approach. It offers more flexibility and more power - but, as ever, those advantages come at a cost.
Your question is very important to the community in the respect that you are asking a strategic question: which technology do I use, and when?
Core Data is best for the day-to-day work of a list-based app. Core Data is built to mirror the storage of data, similar to how databases work. Relational structures, sorting, key indexing and other row-based attributes are best supported by Core Data.
Property lists (*.plist) are best suited to one-time updates of critical environmental settings. The user, for example, can optionally set .plist attributes through the iOS Settings app. So passwords, account settings, email addresses, and configuration options can be set here nicely. This kind of data is very different from frequently updated, transactional data.
XML persistence is closely related to .plist, in that a property list (or .plist) is an XML file in itself. Hence, you could download a stream of XML data and then use it in your app with the same programming rubric you would use to adjust a property list. So receiving XML data from the web, or uploading such a list, maps nicely to XML persistence.
AWS has also proposed the AWS-Persistence library to support synchronizing your Core Data collections with their online databases. This could prove helpful by having a user populate data locally via Core Data, then lazily/opportunistically uploading the list. For your purposes (a grocery shopping list), this could give the user immediacy while giving your server an interesting big-data opportunity (analyze user transactions, provide recommendations, sell ads, etc.).
Hope this gets future visitors tapping into the wealth of what iOS provides -- peace!
