I'm working on an app where I have a nested tree structure, i.e. as below:
Main List ---> Multiple Child Lists
---------      -----------
SchoolOne ---> department1
          ---> department2
          ---> department3
          ---> and so on
SchoolTwo ---> department1
          ---> department2
          ---> department3
          ---> department4
          ---> and so on
The main list needs to be displayed in a UITableView, and upon tapping a row I would show its child list, again most likely in a UITableView.
Also, main list entries will be entered manually with a dedicated name, and after that child lists can be added to each entry (similar to the master-detail sample app from Xcode 4.2).
I'm struggling to understand which will be the better solution: a property list, Core Data, or SQLite. Since I'm new to iOS development, I'm confused about how the overall data structure gels together.
Also, a new school (say SchoolNew) should be able to copy departments from the existing child lists of an existing main list entry, i.e. SchoolOne or SchoolTwo, etc.
Can someone help with pointers and tutorials so I can get a better view of how to nest this the easy way?
For any kind of data storage and modelling work you should look at Core Data.
A property list will work, but you'll need to load it all into memory and write changes back yourself. SQLite will work, but you'll be messing about with table rows and queries.
Core Data lets you store and retrieve your data in ways that are supported and well optimised for iOS. You don't need to think in terms of tables, joins, and queries (as you would with SQLite); you can think in terms of objects and relationships between objects. You'll be using a UITableView showing master-detail views, and the NSFetchedResultsController class is designed to support exactly this along with Core Data.
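As a rough sketch of how that could look (the entity and attribute names here are assumptions, not anything prescribed): a School entity with a to-many departments relationship to a Department entity, and an NSFetchedResultsController feeding the master UITableView:

    // Model (set up in the Core Data model editor):
    //   School:     name (string), departments (to-many -> Department)
    //   Department: name (string), school (to-one -> School, inverse)
    #import <CoreData/CoreData.h>

    // Fetch all schools, sorted by name, to drive the master table view.
    - (NSFetchedResultsController *)schoolsControllerWithContext:(NSManagedObjectContext *)context
    {
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"School"
                                     inManagedObjectContext:context];
        request.sortDescriptors = [NSArray arrayWithObject:
            [NSSortDescriptor sortDescriptorWithKey:@"name" ascending:YES]];

        NSFetchedResultsController *frc =
            [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                managedObjectContext:context
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];
        NSError *error = nil;
        if (![frc performFetch:&error]) {
            NSLog(@"Fetch failed: %@", error);
        }
        return frc;
    }

The detail table for a selected school is then just the objects in its departments relationship, and "copying" departments from one school to another is a matter of duplicating those managed objects and adding them to the new school's relationship.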
Not really sure if this is an explicit question or just a request for input. I'm looking at Azure Data Factory to implement a data migration operation. What I'm trying to do is the following:
I have a NoSQL DB with two collections. These collections are associated via a common property.
I have a MS SQL Server DB whose data is related to the data within the NoSQL DB collections via an attribute/column.
One of the NoSQL DB collections will be updated on a regular basis, the other one not so often.
What I want to do is prepare a Data Factory pipeline that will grab the data from all 3 DB locations and combine it based on the common attributes, resulting in a new dataset. Then, from this dataset, push the data to another SQL Server DB.
I'm a bit unclear on how this is to be done within Data Factory. There is a copy activity, but it only works on a single input dataset, so I can't use it directly. I see that there is a concept of data transformation activities, which look like they are specific to massaging input datasets to produce new datasets, but I'm not clear on which ones would be relevant to what I'm trying to do.
I did find that there is a special activity called a Custom Activity, which is in effect a user-defined activity that can be developed to do whatever you want. This looks the closest to being able to do what I need, but I'm not sure if it is the most optimal solution.
On top of that, I am also unclear about how the merging of the 3 data sources would work if joining data across them is required. I don't know how you would do this if the datasets are just snapshots of the originating source data, which leads me to think data could end up missing. I'm not sure if publishing some of the data someplace would be required, but that seems like it would in effect mean maintaining two stores for the same data.
Any input on this would be helpful.
There are a lot of things you are trying to do.
I don't know if you have experience with SSIS, but what you are trying to do is fairly common for either of these integration tools.
Your ADF diagram should look something like this:
1. Define your 3 data sources as ADF Datasets, each on top of a corresponding Linked Service.
2. Build a pipeline that brings the information from SQL Server into a temporary data store (an Azure Table, for example).
3. Build 2 pipelines that each take one of your NoSQL datasets as input and run an activity to update the temporary data store, which is their output.
4. Build a pipeline that brings all the data from the temporary data store into your other SQL Server.
Steps 2 and 3 could be switched depending on which source is the master.
ADF can run multiple tasks one after another or concurrently. Simply break the task down into logical jobs and you should have no problem coming up with a solution.
I have 2 data sources (db1, db2) and 2 datasets. The 2 datasets are stored procedures, one from each data source.
Dataset1 must run first, to create a table for dataset2 to update and display (dataset1 will show results too).
Because the data in that table must be based on some tables in db1, the stored procedure creates the table in db2 by using a linked server.
I have searched online and tried "single transaction" on the data source, but it shows an error in dataset1 with no details.
Is there any way to do this? I want to generate an Excel file with 2 sheets from these results.
Check out this post.
The default behavior of SSRS is to run the datasets at the same time. They are run in the order in which they appear in your rdl (top down when looking at them in the report data pane). Changing the behavior for a single data source with multiple datasets is as simple as ticking a checkbox in the data source dialog.
With multiple data sources it is a little bit more tricky!
Here is the explanation from the MSDN blog post linked above:
Serializing dataset executions when using multiple data sources:
Note that datasets using different data sources will still be executed in parallel; only datasets of the same data source are serialized when using the single transaction setting. If you need to chain dataset executions across different data sources, there are still other options to consider.
For example, if the source databases of your data sources all reside on the same SQL Server instance, you could use just one data source to connect (with single transaction turned on) and then use the three-part name (catalog.schema.object_name) to execute queries or invoke stored procedures in different databases.
Another option to consider is the linked server feature of SQL Server, and then use the four-part name (linked_server_name.catalog.schema.object_name) to execute queries. However, make sure to carefully read the documentation on linked servers to understand its performance and connection credential implications.
This is an interesting question, and while I think there might be another way of doing it, that would take a bit of time playing around with your datasets, plus more information on your data source setup.
Hope this helps though.
I have an application that uses two merged Core Data models mapped to two different data stores (both SQLite stores) via model configurations (each unique configuration within each model is mapped to its own data store). The persistent store coordinator does a good job of saving the relevant data into the correct store. However, the problem is that when the stores are initially created by Core Data on the very first save operation, their schemas look absolutely identical and correspond to a union of the two merged models.
Is there any way to control Core Data so that it creates each store solely based on the configuration/model mapped to that store?
I guess not, because if Core Data generated partial schemas in the different persistent stores, that could destroy the relationships between them and thus cause problems. At least at this stage I don't think Apple intends to support this.
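For reference, the kind of setup being described looks something like this (the configuration names, model, and store URLs are placeholders):

    // Two configurations defined in the merged model, each mapped to its own SQLite store.
    NSPersistentStoreCoordinator *psc =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:mergedModel];
    NSError *error = nil;
    [psc addPersistentStoreWithType:NSSQLiteStoreType
                      configuration:@"ConfigurationA"
                                URL:storeAURL options:nil error:&error];
    [psc addPersistentStoreWithType:NSSQLiteStoreType
                      configuration:@"ConfigurationB"
                                URL:storeBURL options:nil error:&error];
    // Saves are routed to the correct store, but both SQLite files are
    // still created with the union of the merged models' schemas.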
I work with iOS 4.3 on the iPhone.
In my project I need a read-only, prepopulated table of data (say a table with 20 rows and 20 fields).
This data has to be fetched by a key on the row.
What is the better approach? Core Data, archives, SQLite, or something else? And how can I prepare and store this table?
Thank you.
I would use Core Data for that. Drawback: you have to write a program (desktop or iOS) to populate the persistent store.
For how to use a pre-populated store, have a look at the Recipes sample code from Apple.
The simplest approach would be to use an NSArray of NSDictionary objects and save the array to disk as a plist. Include the plist in your build, then open it read-only from the app bundle at runtime.
Each "row" would be an element index of the array, which returns a dictionary object wherein each "column" is a key-value pair.
I've done this 2 different ways:
1. Saved all my data as dictionaries in a plist, then deserialized everything and loaded it into the app during startup.
2. Created a program during development that populates the Core Data db. Saved that db in the app bundle, then copied the db during app startup into the Documents folder for use as the persistent store (see the sketch below).
Both options are relatively easy, and if your initial data requirements get very large, the second has also proven to be the most performant for me.
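For the second approach, the copy-on-first-launch step looks something like this (the store file name is a placeholder):

    // Copy the pre-populated store from the bundle into Documents on first
    // launch, before adding it to the persistent store coordinator.
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                  NSUserDomainMask, YES) lastObject];
    NSString *storePath = [documentsDir stringByAppendingPathComponent:@"Data.sqlite"];

    if (![fileManager fileExistsAtPath:storePath]) {
        NSString *seedPath = [[NSBundle mainBundle] pathForResource:@"Data"
                                                             ofType:@"sqlite"];
        NSError *error = nil;
        if (![fileManager copyItemAtPath:seedPath toPath:storePath error:&error]) {
            NSLog(@"Failed to copy the seed store: %@", error);
        }
    }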
I'm converting an app from SQLitePersistentObjects to Core Data.
In the app, I have a class that I generate many* instances of from an XML file retrieved from my server. The UI can trigger actions that will require me to save some* of those objects until the next invocation of the app.
Other than having a single NSManagedObjectContext for each of these objects (shared only with their subservient objects, which can include blobs), I can't see a way to have fine-grained control (i.e. at the object level) over which objects are persisted. If I try to have a single context for all newly created objects, I get an exception when I try to move one of my objects to a new context so I can persist it on its own. I'm guessing this is because the objects it owns are left in the 'old' context.
The other option I see is to have a single context, persist all my objects, and then delete the ones I don't need later - this feels like it would hit the database too much, but maybe Core Data does magic.
So:
1. Am I missing something basic about the way my Core Data app should be architected?
2. Is having a context per object a good design pattern?
3. Is there a better way to move objects between contexts, to avoid 2?
* where "many" means "tens, maybe hundreds, not thousands" and "some" is at least one order of magnitude less than "many"
Also cross-posted to the Apple forums.
Core Data is really not an object persistence framework. It is an object graph management framework that just happens to be able to persist that graph to disk (see this previous SO answer for more info). So trying to use Core Data to persist just some of the objects in an object graph is going to be working against the grain. Core Data would much rather manage the entire graph of all objects that you're going to create. So, the options are not perfect, but I see several (including some you mentioned):
1. You could create all the objects in the Core Data context, then delete the ones you don't want to save. Until you save the context, everything is in-memory, so there won't be any "going back to the database" as you suggest. Even after saving to disk, Core Data is very good at caching instances in the contexts' row cache, and there is surprisingly little overhead to just letting it do its thing and not worrying about what's on disk and what's in memory.
2. If you can create all the objects first and then do all the processing in-memory before deciding which objects to save, you can create a single NSManagedObjectContext whose persistent store coordinator has only an in-memory persistent store. When you decide which objects to save, you can then add a persistent (XML/binary/SQLite) store to the persistent store coordinator, assign the objects you want to save to that store (using the context's assignObject:toPersistentStore: method; see the sketch at the end of this answer), and then save the context.
3. You could create all the objects outside of Core Data, then copy the objects to be saved into a Core Data context.
4. You can create all the objects in a single in-memory context and write your own methods to copy those objects' properties and relationships to a new context, saving just the instances you want. Unless the entities in your model have many relationships, this isn't that hard (see this page for tips on migrating objects from one store to another using a multi-pass approach; it describes the technique in the context of versioning managed object models and is no longer needed in 10.5 for that purpose, but the technique applies to your use case as well).
Personally, I would go with option 1 -- let Core Data do its thing, including managing deletions from your object graph.
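If you do go with option 2, the mechanics look roughly like this (model, storeURL, and objectsToKeep are placeholders you'd supply):

    // Start with a coordinator that has only an in-memory store.
    NSPersistentStoreCoordinator *psc =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
    NSError *error = nil;
    [psc addPersistentStoreWithType:NSInMemoryStoreType configuration:nil
                                URL:nil options:nil error:&error];

    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
    [context setPersistentStoreCoordinator:psc];

    // ... create and process all of your objects in the context ...

    // Once you know which objects to keep, add a persistent store and
    // assign the keepers to it before saving.
    NSPersistentStore *sqliteStore =
        [psc addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                    URL:storeURL options:nil error:&error];
    for (NSManagedObject *object in objectsToKeep) {
        [context assignObject:object toPersistentStore:sqliteStore];
    }
    [context save:&error];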