I am implementing a Core Data model version change for one of my iPad applications. Apparently some users of my app have databases larger than 1 GB. As a result, doing the migration as a lightweight migration blows up the memory. Therefore, I was trying to do a custom migration with multiple passes (as suggested by Apple). However, I am not sure how to divide one mapping model into several smaller mapping models (ideally one per entity), since the entity mappings in the generated mapping model are all related.
I won't be able to post an image because I am new to Stack Overflow.
Inside the mapping model I added two more mappings. For one DataMedia, I need to create two ASData objects to store the media's binary data in a separate table. The large data is initially stored in the "DataMedia" table (in the worst case that table is almost 800 MB).
So here are my questions:
1. What is the best way to do this migration without blowing up the memory?
2. Is multi-pass migration a solution? If so, how do I divide entity mappings that have relationships to each other into separate mapping models? Does that mean I need to implement the "Relationship Mapping" manually?
I have an app that uses Core Data. I have defined the entities and their respective attributes several times over. Now I pretty much have the model finalized, looking like this:
I deleted the old SQLite DB and re-ran the program, which creates a new SQLite DB, and it looks like this (using SQLite Database Browser). The areas highlighted in yellow are the ones that don't belong there (IMHO)... how do I clear the old junk out when the SQLite DB is rebuilt from Core Data?
The motivation is quite simple.
When you use entity inheritance, Core Data, under the hood, creates a single (relational) table that contains all the properties of the parent entity as well as those of its child (or children).
Although this feature is very useful, you should be aware of this mechanism so you can avoid performance penalties.
Anyway, you should not work with the database that Core Data creates for you. You should think only in terms of the object graph. It will simplify your life.
Hope that helps.
In http://www.yesodweb.com/book/persistent there is no mention of SQL views.
I have (even in imperative languages) been very fond of immutable database schema design, i.e. only INSERTs and SELECTs; UPDATEs and DELETEs are not used.
This has the advantage of preserving all history, at the expense of making the current 'state' a relatively expensive pure function of the history in the DB.
E.g. there is no 'user' table, just 'user_created', 'user_password_updated' and 'user_deleted' tables, which are unified in a 'user' SQL VIEW showing the current state of users.
How should I work with VIEWs in Persistent? Should I use Persistent at all - is it (ironically for Haskell) too tightly focused on a mutable DB for my use-case?
It's been a long time since the question was asked, but I thought it was worth responding, because seven years later the answer has not changed and I really like your idea of keeping the old versions of tables around and reading them through views! One drawback of this approach is that using Template Haskell in persistent slows down compile times a lot. I once had a database of about 50 tables defined with persistent's Template Haskell, and it took over half an hour to compile whenever the definitions changed.
Yesod's persistent does not support SQL views, and I doubt it ever will, because it intends to be database agnostic. Currently it looks like it supports CouchDB, MongoDB, MySQL, PostgreSQL, Redis and SQLite. Not all of these databases support SQL-style views, so it would be hard to abstract over all of them.
Where persistent excels is in providing an easy way to create a set of Haskell types that serialize to and from different databases. It provides type class instances and functions to do single-table queries, and these work really well. If you want to do join queries on an SQL database that you are interfacing with through persistent, then you can use esqueleto, a type-safe EDSL for SQL join queries.
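For example, here is a minimal sketch of what persistent's Template Haskell entity definitions and an esqueleto join look like. Person and BlogPost are made-up entities (nothing from your schema), "example.sqlite3" is a throwaway file name, and the exact set of LANGUAGE extensions you need depends on your persistent/esqueleto versions:

    {-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs,
                 GeneralizedNewtypeDeriving, MultiParamTypeClasses,
                 OverloadedStrings, QuasiQuotes, TemplateHaskell, TypeFamilies #-}

    -- Sketch only: Person and BlogPost are made-up entities.  The quasi-quoted
    -- block is persistent's Template Haskell entity syntax; the query uses
    -- esqueleto's classic join syntax.  Newer releases may want a few extra
    -- LANGUAGE extensions or the newer esqueleto Experimental module.
    import Control.Monad.IO.Class (liftIO)
    import Database.Esqueleto
    import Database.Persist.Sql (runMigration)
    import Database.Persist.Sqlite (runSqlite)
    import Database.Persist.TH

    share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
    Person
        name String
        age  Int
    BlogPost
        title    String
        authorId PersonId
    |]

    main :: IO ()
    main = runSqlite "example.sqlite3" $ do  -- throwaway SQLite file for the example
        runMigration migrateAll
        -- titles of posts whose author is at least 18, paired with the author's name
        rows <- select $
                from $ \(p `InnerJoin` b) -> do
                    on (b ^. BlogPostAuthorId ==. p ^. PersonId)
                    where_ (p ^. PersonAge >=. val 18)
                    return (p ^. PersonName, b ^. BlogPostTitle)
        liftIO $ mapM_ print rows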
As far as handling SQL views in Haskell goes, I have not come across any tool yet. You can either use rawQuery, which will work but will be harder to maintain, or you can build your own tool around one of the Haskell DB interfaces like postgresql-simple, which is what persistent does. In fact, you can even start with the persistent source code for whatever database you are using and build an SQL view EDSL as you need it. In a closed-source project I helped build a custom PostgreSQL interface based on some of persistent's ideas and types, but without using Template Haskell, because the compile times were too slow.
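For illustration, the raw-SQL route might look something like the sketch below. I'm using rawSql and rawExecute here (relatives of rawQuery that are a bit more convenient); the user_created and user_deleted tables, the user_current view, and their columns are hypothetical stand-ins for your schema:

    {-# LANGUAGE OverloadedStrings #-}
    import Control.Monad.IO.Class (MonadIO)
    import Data.Text (Text)
    import Database.Persist.Sql (Single (..), SqlPersistT, rawExecute, rawSql)

    -- Define the view once with a raw statement, outside persistent's migrations.
    -- Table, view and column names are made up for illustration.
    createUserView :: MonadIO m => SqlPersistT m ()
    createUserView = rawExecute
        "CREATE VIEW user_current AS \
        \SELECT c.user_id, c.name \
        \FROM user_created c \
        \WHERE NOT EXISTS \
        \(SELECT 1 FROM user_deleted d WHERE d.user_id = c.user_id)"
        []

    -- Read from the view; Single wraps plain columns that are not a full entity.
    currentUsers :: MonadIO m => SqlPersistT m [(Single Int, Single Text)]
    currentUsers = rawSql "SELECT user_id, name FROM user_current" []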
I'm trying to understand the choices for code generation/ORM tools and discover which solution will best meet the requirements I have and the limitations present.
I'm creating a foundational solution to be used for new projects. It consists of ASP.NET MVC 3.0 plus layers for business logic and data access. The data access layer will need to go against Oracle for now, and then switch to SQL Server this year as the database migration is finished.
From a DTO standpoint, mapping to custom types in the solution, what ORM/code generation tool will work for creating the code I need when it can ONLY access stored procs in Oracle and SQL Server?
Meaning, I need to generate the custom objects that come back from the stored procedures and that are pushed to them as parameters; I don't need to generate the sprocs themselves, as they already exist. I'm looking for the representation of what the sproc needs and gives back to be generated into DTOs. In some cases I can go against views and generate DTOs, and I'm assuming most tools already do this. But 90% of the time I don't have direct access to any tables or views, only stored procs.
Does this make sense?
ORMs are best at mapping objects to tables (and/or views), not mapping objects to sprocs.
Very few tools can do automated code generation against whatever output a sproc may generate, depending on the complexity of the sproc. It's much more straightforward to generate code for the input to a sproc, as that is generally well defined and clear.
I would say if you are stuck with sprocs, your options for using third party code to help reduce your development and maintenance time are severely limited.
I believe either LinqToSql or EntityFramework (or both?) is capable of some magic with regard to SQL Server to try to figure out, mostly automatically, what a sproc may be returning. I don't think it works all the time; it's just sophisticated guesswork, and I seriously doubt it would work with Oracle. I am not aware of anything else, software-wise, that even attempts to figure out what a sproc may return.
A sproc can return multiple diverse record sets that can be built dynamically by the sproc depending on the input and data in the database. A technical solution to automatically anticipating sproc output seems like it would require the following:
A static set of underlying data in the database
The ability to pass all possible inputs to the sproc and execute the sproc without any negative impact or side effects
That would give you a static set of possible outputs for any given valid input. A small change in the data in the database could invalidate everything.
If I recall correctly, the magic Microsoft did was something like calling the sproc passing NULL for all input parameters and assuming the output is always exactly the first recordset that comes back from the database. That is clearly an incomplete solution to the problem, but in simple cases it appears to be magic because it can work very well some of the time.
I am building a tool that searches people based on a number of attributes. The values for these attributes are scattered across several systems.
As an example, dateOfBirth is stored in a SQL Server database as part of system ABC. That person's sales region assignment is stored in some horrible legacy database. Other attributes are stored in a system only accessible over an XML web service.
To make matters worse, the legacy database and the web service can be really slow.
What strategies and tips should I consider for implementing a search across all these systems?
Note: Although I posted an answer, I'm not confident it's a great answer. I don't intend to accept my own answer unless no one else gives better insight.
You could consider using an indexing mechanism to retrieve and locally index the data across all the systems, and then perform your searches against the index. Searches would be an awful lot faster and more reliable.
Of course, this just shifts the problem from one part of your system to another - now your indexing mechanism has to handle failures and heterogeneous systems, but that may be an easier problem to solve.
Another factor is how often the data changes. If you have to query data in real-time that goes stale very quickly, then indexing may not be practical.
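As a toy sketch of the idea (every name below is made up, and a real system would use a proper search engine or database rather than an in-memory map as the index):

    import qualified Data.Map.Strict as Map

    -- A locally gathered copy of the searchable attributes from all systems.
    data PersonRecord = PersonRecord
        { personId    :: String
        , dateOfBirth :: String
        , salesRegion :: String
        } deriving Show

    -- Index the local records by the attribute you search on most.
    type RegionIndex = Map.Map String [PersonRecord]

    buildRegionIndex :: [PersonRecord] -> RegionIndex
    buildRegionIndex = foldr insertRecord Map.empty
      where
        insertRecord r = Map.insertWith (++) (salesRegion r) [r]

    -- Searching the local index is a cheap lookup instead of a round trip
    -- to the slow source systems.
    searchByRegion :: String -> RegionIndex -> [PersonRecord]
    searchByRegion = Map.findWithDefault []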
If you can get away with a restrictive search, start by returning a list based on the search criteria corresponding to the fastest data source. Then join up those records with the other systems and remove records which don't match the search criteria.
If you have to implement OR logic, this approach is not going to work.
While not an actual answer, this might at least get you partway to a workable solution. We had a similar situation at a previous employer: lots of data sources, different ways of accessing those data sources, different access permissions, military/government/civilian sources, etc. We used Mule, which is built around the Enterprise Service Bus concept, to connect these data sources to our application. My details are a bit sketchy, as I wasn't the actual implementor, just an integrator, but what we did was define a channel in Mule. Then you write a simple integration piece to go between the channel and the data source, and between the application and the channel. The integration piece does the work of making the actual query and formatting the results, so we had a generic SQL integration piece for accessing a database, and for things like web services we had some base classes that implemented common functionality, so the actual customization of the integration pieces was a lot less work than it sounds like. The application could then query the channel, which would handle accessing the various data sources, transforming the results into a normalized bit of XML, and returning them to the application.
This had a lot of advantages for our situation. We could include new data sources for existing queries by simply connecting them to the channel; the application didn't have to know or care what data sources were there, as it only looked at the data from the channel. Since data can be pushed or pulled through the channel, we could have a data source update the application when, for example, it was updated.
It took a while to get it configured and working, but once we got it going, we were pretty successful with it. In our demo setup, we ended up with 4 or 5 applications acting as both producers and consumers of data, and connecting to maybe 10 data sources.
Have you thought of moving the data into a separate structure?
For example, Lucene stores data to be searched in a schema-less inverted index. You could have a separate program that retrieves data from all your different sources and puts it in a Lucene index. Your search could work against this index, and the search results could contain a unique identifier and the system each result came from.
http://lucene.apache.org/java/docs/
(There are implementations in other languages as well)
Have you taken a look at YQL? It may not be the perfect solution, but it might give you a starting point to work from.
Well, for starters I'd parallelize the queries to the different systems. That way you minimize the overall query time.
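For example, something along these lines (a rough sketch using Haskell's async package; the three fetch functions are made-up stand-ins for the SQL Server query, the legacy database and the web service):

    import Control.Concurrent.Async (concurrently)

    -- Stand-ins for the three slow back ends.
    fetchDateOfBirth :: String -> IO String
    fetchDateOfBirth _personId = return "1980-01-01"

    fetchSalesRegion :: String -> IO String
    fetchSalesRegion _personId = return "EMEA"

    fetchWebAttributes :: String -> IO [String]
    fetchWebAttributes _personId = return ["some", "attributes"]

    -- Fire all three lookups at once; total latency is roughly the slowest
    -- call rather than the sum of all three.
    lookupPerson :: String -> IO (String, String, [String])
    lookupPerson personId = do
        (dob, (region, attrs)) <-
            concurrently
                (fetchDateOfBirth personId)
                (concurrently (fetchSalesRegion personId)
                              (fetchWebAttributes personId))
        return (dob, region, attrs)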
You might also want to think about caching and aggregating the search attributes for subsequent queries in order to speed things up.
You have the option of creating an aggregation service or middleware that aggregates all the different systems so that you can provide a single interface for querying. If you do that, this is where I'd apply the previously mentioned caching and parallelization optimizations.
However, with all of that, you will need to weigh up the development time, deployment time and long-term benefits of the effort against migrating the old legacy database to a faster, more modern one. You haven't said how tied into other systems those databases are, so that may not be a very viable option in the short term.
EDIT: In response to the data going out of date: you can consider caching your data if you don't need it to always match the database in real time. Also, if some data doesn't change very often (e.g. dates of birth), then you should cache it. If you employ caching, you could make your system configurable as to which tables/columns to include or exclude from the cache, and you could give each table/column a configurable cache timeout with an overall default.
Use Pentaho/Kettle to copy all of the data fields that you can search on and display into a local MySQL database
http://www.pentaho.com/products/data_integration/
Create a batch script to run nightly and update your local copy. Maybe even every hour. Then, write your query against your local MySQL database and display the results.