I use Cocoa's Core Data framework, which can write its data to XML via NSXMLStoreType.
For copy & paste in my app, I would now like to write some Core Data objects to NSPasteboard and read them back from there. I thought I should be able to read and write the built-in XML representation. Of course, I could create a Codable interface for my Core Data classes, but I would rather reuse the Core Data implementation.
How can I do this best?
Many thanks in advance!
The problem with this strategy is that the details of the XML store's schema are internal to Apple. If you're going to use the results with another XML store, you should be OK. But I wouldn't expect the XML schema Apple uses, as it is written to disk, to be useful outside of that context, and I wouldn't depend on it not changing.
You can specify the store type when configuring an instance of NSPersistentContainer by setting its persistentStoreDescriptions property. NSPersistentStoreDescription has a type property, which can be set to NSXMLStoreType.
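For example, a minimal sketch (macOS only, since the XML store is not available on iOS; the model name and store URL here are hypothetical):

import CoreData

// Point the container at an XML-backed store instead of the default SQLite one.
let storeURL = URL(fileURLWithPath: "/tmp/Model.xml") // example location
let description = NSPersistentStoreDescription(url: storeURL)
description.type = NSXMLStoreType // macOS-only store type

let container = NSPersistentContainer(name: "Model")
container.persistentStoreDescriptions = [description]
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Failed to load XML store: \(error)")
    }
}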
In my backend, I have data attributes labeled in camelCase:
customerStats: {
    ownedProducts: 100,
    usedProducts: 50,
},
My UI code is set up so that an array of ["label", data] pairs works best most of the time, i.e. it is most convenient for frontend coding. In my frontend, I need these labels to be in proper English so they can be used as-is in the UI:
customerStats: [
    ["Owned products", 100],
    ["Used products", 50],
],
My question is about best practices or standards in web development. I have been inconsistent in my past projects where I would convert the data at random places, sometimes client-side, sometimes right on the backend, sometimes converting it one way and then back again because I needed the JSON data structure.
Is there a coding convention for how the data should be supplied to the frontend?
Right now all my data is transferred to the frontend as JSON. Is it best practice to convert the data into the needed form on the frontend or on the backend? What if I need the JSON attributes for further calculations right on the client?
Technologies I am using:
Frontend: JavaScript / React
Backend: JavaScript / Node.js + Java / Spring
Is there a coding convention for how to transfer data to the front end
If your front end is JavaScript-based, then JSON (JavaScript Object Notation) is the simplest form to consume: it is a stringified version of the objects in memory. See this healthy discussion for more information on JSON.
Given that the most popular front-end development language these days is JavaScript (see the latest SO Survey on technology), it is very common and widely accepted to use the JSON format to transfer data between the back and front end of a solution. The decision to use JSON in non-JavaScript-based solutions is influenced by the development and deployment tools you use; since more developers are using JavaScript, most tools are engineered to support it in some capacity.
It is however equally acceptable to use other structured formats, like XML.
JSON is generally more lightweight than XML, as there is less provision made to transfer metadata about the schema. For in-house data streams, it can be redundant to transfer a fully specced XML schema with each data transmission, so JSON is a good choice where the structure of the data is not in question between the sender and receiver.
XML is a more formal format for data transmission; it can include full schema documentation that allows receivers to utilize the information with little or no additional documentation.
CSV or other custom formats can reduce the bytes sent across the wire, but make it hard to visually inspect the data when you need to, and there is an overhead at both the sending and receiving end to format and parse the data.
Is it best practice to convert the data to the form that is needed on the frontend or backend?
The best practice is to reduce the number of times that a data element needs to be converted. Ideally you never have to convert between a label and the data property name... This is also the primary reason for using JSON as the data transfer format.
Because JSON can be natively interpreted by a JavaScript front end, conversion can essentially be reduced to the server-side boundary where data is serialized/deserialized. There is in effect no conversion in the front end at all.
How to refer to data by the property name and not the visual label
The generally accepted convention in this space is to separate the concerns of the data model from those of the user experience, the view. Importantly, the view is the layer closest to the user; it represents a given 'point of view' of the data model.
It is hard to tailor a code solution for OP without any language or code provided for context. In the abstract, applying this concept means not having the data model carry the final information about how the data should be displayed; instead, another piece of code provides the information needed to present the data.
In different technologies and platforms we refer to this in different ways, but the core concept of separating the Model from the View or Presentation is consistently represented through these design patterns:
Exploring the MVC, MVP, and MVVM design patterns
MVP vs MVC vs MVVM vs VIPER
For OP's specific scenario, this might involve a mapping structure like the following:
customerStatsLabels: {
    ownedProducts: "Owned products",
    usedProducts: "Used products",
}
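As a rough sketch of the lookup, with hypothetical dictionaries standing in for the JSON above (shown in Swift purely as an illustration; in JavaScript the equivalent is a one-line map over Object.entries):

// Hypothetical raw stats and label mapping, mirroring the structures above.
let customerStats = ["ownedProducts": 100, "usedProducts": 50]
let customerStatsLabels = ["ownedProducts": "Owned products",
                           "usedProducts": "Used products"]

// Build the ["label", data] pairs the UI wants, falling back to the raw key
// when no label is defined.
let labeled: [(String, Int)] = customerStats
    .sorted { $0.key < $1.key }
    .map { (customerStatsLabels[$0.key] ?? $0.key, $0.value) }
// labeled == [("Owned products", 100), ("Used products", 50)]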
If this question is updated with some code around how the UI is constructed I will update this response with something more specific.
NOTE:
In JavaScript, it is easy to convert an object into an array of [key, value] pairs and back (e.g. via Object.entries and Object.fromEntries), so it is very easy to tweak existing code that is based on arrays into code based on objects and vice versa.
I understand how to use the FlatBufferBuilder and the specific type builder (e.g., MyNestedTableBuilder) to get the WIPOffset, and then use that to get the finished_data buffer (&[u8]). I have then been using get_root to get an object based on the buffer, so now I have an instance of MyNestedTable. Next I need to pass that to another function and create a new table instance via a new builder, MyTable, which has the field add_my_nested_table. I cannot see how to do this without unpacking MyNestedTable and rebuilding it again (which seems very inefficient). I am sure there is a good way to do this; I just haven't found it, even after studying the generated code and API.
Generally we have a need to pass objects around and reuse them, over the network or via API calls in Rust.
MyNestedTable isn't really an object; it is a handle to data inside the serialized data (your &[u8]), and any field access looks up this data on the fly.
None of the base APIs for any of the FlatBuffers supported languages (including Rust) have code generated that allows automatic re-serializing, since that is not a frequent operation in most use cases (you already have the serialized data).
The way to do it is through the optional "object API", supported in C++ and some other languages, but not yet in Rust. As you can see, CasperN is working on such an API.
Until then, you may consider using nested_flatbuffer or some other construct to directly pass the serialized data to wherever it needs to go.
I am trying to store a plist and several binary files (let's say images) as part of a UIManagedDocument. The names of the binary files are stored in a Core Data attribute, and I don't need to enumerate them, just access the right one when showing the related entity.
The file structure that I want to have is:
- <File yyyyMMdd-HHmmss>.extdoc
  - StoreContent
    - persistentStore
  - AdditionalContent
    - ListStatus.plist (used to store per document defaults)
    - Images
      - uuid1.png
      - uuid2.png
      - ...
      - uuidn.png
So far, I have successfully followed the instructions in How do I save additional content into my UIManagedDocument file packages?, but when I try to add the binary files there are some things that I don't know how to do.
Should I treat the URL /the/path/File yyyyMMdd-HHmmss.extdoc/AdditionalContent (the default one provided with readAdditionalContentFromURL:error:) as an NSFileWrapper? Are there any advantages/disadvantages vs. just using the URLs? I find it more complicated to use the file wrapper, since the plist has to be read using the file wrapper accessors and NSCoder (I guess), and for the files, I have to store the file wrapper for the Images directory and then obtain the corresponding node with objectForKey (I assume). But Apple's Document-Based Apps Programming Guide for iOS, regarding custom formats instead of NSData or NSFileWrapper, states: "Keep in mind that your code will have to duplicate what UIDocument does for you, and so you must deal with greater complexity and a greater possibility of error." Am I misunderstanding this?
Per-document defaults are declared as properties: the setter modifies the NSDictionary that maps the plist and marks the document as updated, and the getter accesses the dictionary with the proper key. How do I expose the ability to read/write the binary files? Should I add methods to my subclass of UIManagedDocument, such as - (void)writeImage:(NSString *)uuid; and - (UIImage *)readImage:(NSString *)uuid;? And should I keep this data in memory until the document is saved? How?
Assuming that NSFileWrapper is the way to go, if I plan to use this document with iCloud should I use file coordinators with the file wrapper? If so, how?
Any source code for each question will be greatly appreciated. Thank you.
P.S.: I know that I could save some binary data inside of Core Data, but I don't feel comfortable with that solution. Among other reasons, I would rather store the PNG data for image files than a serialized version of UIImage that won't be compatible with NSImage if I want to create a desktop app.
I'd like to say that, in general, I rather like UIManagedDocument. It has a few advantages over raw Core Data. For example, it sets up the entire Core Data stack for you automatically. It also sets up nested managed object contexts for you, so you get free background saving. None of that is particularly earth-shattering, but it's a lot of functionality from a tiny amount of code.
I haven't played around with saving additional information...but here are my thoughts.
First, you shouldn't need to treat the new URL as a file wrapper. You should just be able to do regular file operations on the provided URL. Just make sure you have everything implemented properly in additionalContentForURL:error:, writeAdditionalContent:toURL:originalContentsURL:error: and readAdditionalContentFromURL:error:. The read and write operations need to be symmetric. And you should probably snapshot your data in additionalContentForURL:error: so that everything will be saved in a known, good state (since the save operations are asynchronous).
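A rough sketch of what symmetric overrides might look like in Swift (the images dictionary and file layout are hypothetical; verify the exact override signatures against the UIManagedDocument docs):

import UIKit
import CoreData

class ImagesDocument: UIManagedDocument {
    // Hypothetical in-memory cache of PNG data keyed by UUID string.
    var images: [String: Data] = [:]

    // Snapshot the extra content; saving is asynchronous, so hand back a copy.
    override func additionalContent(for absoluteURL: URL) throws -> Any {
        return images // value-type copy, safe to write on the background queue
    }

    override func writeAdditionalContent(_ content: Any, to absoluteURL: URL,
                                         originalContentsURL: URL?) throws {
        guard let images = content as? [String: Data] else { return }
        let imagesDir = absoluteURL.appendingPathComponent("Images", isDirectory: true)
        try FileManager.default.createDirectory(at: imagesDir,
                                                withIntermediateDirectories: true)
        for (uuid, data) in images {
            try data.write(to: imagesDir.appendingPathComponent("\(uuid).png"))
        }
    }

    override func readAdditionalContent(from absoluteURL: URL) throws {
        let imagesDir = absoluteURL.appendingPathComponent("Images", isDirectory: true)
        let files = (try? FileManager.default.contentsOfDirectory(
            at: imagesDir, includingPropertiesForKeys: nil)) ?? []
        for file in files where file.pathExtension == "png" {
            images[file.deletingPathExtension().lastPathComponent] =
                try Data(contentsOf: file)
        }
    }
}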
As an alternative, have you considered using the Allows External Storage flag on the binary attribute in your data model instead of saving the files manually? This should cause Core Data (depending on the size of the binary data) to store them externally automatically. I looked at the release notes, and I didn't see anything saying you couldn't use this feature with iCloud. That might be the easiest fix.
Attacking a side point for the moment (as I have not had ANY good experience with UIManagedDocument).
You can save the binary data inside of Core Data for an iOS 5.0+ application using the external storage option. Then you can save the PNG of the image to Core Data directly and not need to worry about a UIManagedDocument or about bloating the sqlite file.
There is nothing stopping you from storing the PNG instead of a UIImage.
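A minimal sketch (Swift, using the modern pngData() API; the "photo" attribute name is hypothetical):

import UIKit
import CoreData

// "photo" is a hypothetical Binary Data attribute with
// "Allows External Storage" enabled in the model editor.
func savePNG(of image: UIImage, into item: NSManagedObject,
             context: NSManagedObjectContext) throws {
    item.setValue(image.pngData(), forKey: "photo") // raw PNG bytes, not an archived UIImage
    try context.save()
}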
One other thought. You may need to use an NSFileCoordinator for the read and write operations. Technically, any read or write operations in the iCloud container need to use a file coordinator (to coordinate with the iCloud sync service--this prevents accidentally corrupting a file by reading it while another process is writing to it).
I know that UIDocument wraps most of its input and output methods in file coordination automatically. I'd guess that these methods are similarly wrapped (since they give you a URL to use); however, the docs aren't very clear.
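If you do need to coordinate manually, a minimal sketch of a coordinated read (Swift):

import Foundation

// Coordinate a read of a file in the iCloud container before touching it.
func coordinatedRead(of url: URL) throws -> Data {
    let coordinator = NSFileCoordinator(filePresenter: nil)
    var coordinatorError: NSError?
    var result: Result<Data, Error> = .failure(CocoaError(.fileReadUnknown))
    coordinator.coordinate(readingItemAt: url, options: [],
                           error: &coordinatorError) { actualURL in
        result = Result { try Data(contentsOf: actualURL) }
    }
    if let error = coordinatorError { throw error }
    return try result.get()
}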
I am facing a really weird problem! In my application, which is using Core Data, I created a model. When the application ran, the database was created. I opened the database using the Firefox SQLite Manager and added another column to it, "ZABOUT" (for some reason Core Data prefixes columns with Z). Anyway, I also populated the ZABOUT column with some data. But now when I retrieve the object from within the app and display the value of ZABOUT, it displays null. Any ideas what is going on?
If I set the value from within the iPhone application then it prints out okay.
Core Data and SQLite are separate technologies, and you can't mix them that way. If you want to add another attribute, you need to do it through your Core Data model, not through SQLite.
Core Data configures the sqlite file it uses as a backing store with a private schema that is derived from its data model. Because the Core Data framework is controlling the schema and contents of the .sqlite file, it's best not to think of it as an sqlite file. The fact that SQLite is used is essentially an implementation detail.
More on Core Data, and what Core Data is Not. Core Data is not a relational database, and if that's what you're looking for, you might consider just using SQLite directly. XMLPerformance is probably the best sqlite-on-iOS sample.
I'm getting ready to dive into my first Core Data adventure. While evaluating the framework, two questions came up that really got me thinking about whether to use Core Data at all for this project or to stick with SQLite.
My app will heavily rely upon importing data from an external source. I'm aware that one can import into Core Data but handling complex relationships seems complicated and tedious. Is there an easy way to accomplish complex imports?
The app has to be able to execute complex queries spanning multiple tables or having multiple conditions. Building these predicates and expressions simply scares me...
Is it worth taking the plunge and using Core Data, or should I stick with SQLite?
As I and others have said before, Core Data is really an object-graph management framework. It manages the relationships between model objects, including constraints on their cardinality, and manages cascading deletes etc. It also manages constraints on individual attributes. Core Data just happens to also be able to persist that object graph to disk. It can do this in a number of formats, including XML, binary, and via SQLite. Thus, Core Data is really orthogonal to SQLite. If your task is dealing with an embedded SQL-compatible database, go with SQLite. If your task is managing the model layer of an MVC app, go with Core Data. In specific answers to your questions:
There is no magic that can automatically import complex data into any model. That said, it is relatively easy in Core Data. Taking a multi-pass approach and using the SQLite backend can help with memory consumption by allowing you to keep only a subset of the data in memory at a time. If the data sets can be kept in memory, you can write a custom persistent store format that reads/writes directly to your legacy data format from within Core Data (see the Atomic Store Programming Guide).
Building a complex NSPredicate declaratively is somewhat verbose but shouldn't scare you. The Predicate Programming Guide is a good place to start. You can, of course, also write predicates using a string format, much like a string-formatted SQL statement. It's worth noting that, as described above, the predicates in Core Data are on the objects and object graph, not on the SQL tables. If you really want to think at the level of tables, stick with SQLite and write your own wrapper.
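For a flavor of how such predicates read in practice, a small sketch (Swift; the Customer entity, its attributes, and its relationships are hypothetical):

import CoreData

// Hypothetical example: customers in a given city owning more than ten
// products, traversing the customer -> address and customer -> products
// relationships directly in the predicate (no joins to write by hand).
func fetchBigCustomers(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Customer")
    request.predicate = NSPredicate(
        format: "address.city == %@ AND products.@count > %d", "Berlin", 10)
    request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
    return try context.fetch(request)
}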
I can't really speak to your first point.
However, regarding your second point, using Core Data means you don't really have to worry about complex queries, since you can just pretend that all the relationships are properly established in memory already (Apple's implementation details aside). It doesn't matter how complex a join might be in a database environment, because you really aren't in a database environment. If you need to get the fourth child of the grandparent of your current object and then find that child's pet's name and breed, all you do is traverse the object graph in code using a series of messages or properties. No worries about joins or anything. The only problem is that it might be really slow depending on your objects' relationships, but I can't speak accurately to that since I haven't actually implemented anything using Core Data (I've just read about it extensively on Apple's and others' websites).
If the importer for the external data is written against the same Core Data model (on the target/destination side of the import), nothing will be conceptually different compared to using/updating the same data through the Core Data stack of your actual application.
If you create the data importer without using the Core Data stack, make sure you study the DB schema that the Core Data model will generate/expect. There is nothing magic there; just make sure you follow how cross-entity relationships are implemented and how entity hierarchies are stored.
I recently had to create a data importer from an Access database into a Core Data based SQLite store as a .NET app. Once my destination Core Data model was defined, I created a small app that populated the SQLite store with randomly generated entities (including all the expected relationships). Then I reverse-engineered how Core Data actually creates the SQLite store for the model and how it handles the relationships by learning from the generated and persisted data. Then I implemented the .NET-based importer/data-transformer according to my observations. In the end, I got a perfectly Core Data friendly data store that could be opened and modified from the application using the Core Data stack on Mac OS X.