How to add a flatbuffer object to a new object? - rust

I understand how to use the FlatBufferBuilder and specific type builder (e.g., MyNestedTableBuilder) to get the WIPOffset and then use that to get the finished_data buffer (&[u8]). I then have been using get_root to get an object based on the buffer, so now I have an instance of MyNestedTable. Then I need to pass that to another function and create a new table instance via a new builder, MyTable, that has the field add_my_nested_table. I cannot see how to do this without unpacking MyNestedTable and rebuilding it again (which seems very inefficient). I am sure there is a good way to do this, I just haven't found it, even after studying the generated code and API.
In general, we need to pass objects around and reuse them in Rust, whether over the network or via API calls.

MyNestedTable isn't really an object; it is a handle to data inside the serialized buffer (your &[u8]), and any field access looks up that data on the fly.
None of the base APIs for any of the languages FlatBuffers supports (including Rust) generate code that allows automatic re-serializing, since that is not a frequent operation in most use cases (you already have the serialized data).
The way to do it is through the optional "object API", supported in C++ and some other languages, but not yet in Rust. As you can see, CasperN is working on such an API.
Until then, you may consider using the nested_flatbuffer attribute or some other construct to pass the serialized data directly to wherever it needs to go.
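For illustration, here is a minimal Rust sketch of that approach, assuming a hypothetical schema in which MyTable declares its field as my_nested_table: [ubyte] (nested_flatbuffer: "MyNestedTable"); the module name my_schema and the field/argument names stand in for whatever your generated code actually uses:

    use flatbuffers::FlatBufferBuilder;

    // Hypothetical schema, compiled with flatc --rust:
    //   table MyNestedTable { value: int; }
    //   table MyTable { my_nested_table: [ubyte] (nested_flatbuffer: "MyNestedTable"); }
    // `my_schema` stands in for your generated module.

    // Build the nested table once; the finished bytes are what gets passed around.
    fn build_nested() -> Vec<u8> {
        let mut fbb = FlatBufferBuilder::new();
        let nested = my_schema::MyNestedTable::create(
            &mut fbb,
            &my_schema::MyNestedTableArgs { value: 42 },
        );
        fbb.finish(nested, None);
        fbb.finished_data().to_vec()
    }

    // Embed the already-serialized nested table without unpacking it: the bytes
    // are copied into the outer buffer as a plain [ubyte] vector.
    fn build_outer(nested_bytes: &[u8]) -> Vec<u8> {
        let mut fbb = FlatBufferBuilder::new();
        let bytes = fbb.create_vector(nested_bytes);
        let outer = my_schema::MyTable::create(
            &mut fbb,
            &my_schema::MyTableArgs { my_nested_table: Some(bytes) },
        );
        fbb.finish(outer, None);
        fbb.finished_data().to_vec()
    }

On the reading side, the code generated for a nested_flatbuffer field normally provides an accessor (something like my_nested_table_nested_flatbuffer()) that returns a MyNestedTable view directly over those bytes, so nothing is ever unpacked or rebuilt.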

Related

Codesys DUT in library

We are working on data logging code and I would like to turn it into a library.
The only thing that different programs will need to define is the struct type for the sample that they want to save. The library will then save a sample every x period. But I don't know exactly how to reference an external DUT from the library code. Is it possible to declare the DUT as an interface or something similar? There must be a way to do this, but I'm not sure what it is.
CODESYS does not have generics; there is no way around that. Although I am not convinced it would be an elegant or fitting design, there is one way you might be able to do this if you really want your type to be visible in your logging library, but it has serious drawbacks.
You can create another library, or more accurately a family of libraries that will all use the same placeholder. Each of these libraries can define a type of the same name. Reference this placeholder from your logging library, so that it will resolve the name. That will allow your library to compile. Then, by controlling which library the placeholder resolves to inside the project, you can select which type is used. The big downside is you cannot use your logging library with two different types inside the same project, as you cannot get a placeholder to resolve to two libraries at the same time.
Personally, I would probably implement the logging library with some sort of log-entry-writer function block that contains a size + an array of bytes and manages the details of the actual logging, and then define inherited function blocks in the projects/libraries that need logging. Each of these inherited function blocks would give access to the bytes in a typed way (method, exposed reference/pointer) and assign the size based on the data type used (SIZEOF in FB_Init, for instance). This requires a bit of code for each type, but really not much, and new types are easy to add with a 20-second copy+paste+modify.
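As a rough Structured Text sketch of that last idea (all names here are hypothetical, not from any existing library):

    (* Base FB owned by the logging library: it only ever sees size + bytes. *)
    FUNCTION_BLOCK FB_LogEntryWriter
    VAR
        pData : POINTER TO BYTE;   // start of the sample to log
        nSize : UDINT;             // number of bytes per sample
    END_VAR
    (* Body: every x period, write nSize bytes starting at pData to the log. *)

    (* Derived FB defined in the project/library that knows the concrete DUT. *)
    FUNCTION_BLOCK FB_MySampleLogger EXTENDS FB_LogEntryWriter
    VAR
        stSample : ST_MySample;    // the project-specific sample struct
    END_VAR

    (* FB_Init of FB_MySampleLogger: wire the typed sample to the base FB once. *)
    METHOD FB_Init : BOOL
    VAR_INPUT
        bInitRetains : BOOL;
        bInCopyCode  : BOOL;
    END_VAR
    pData := ADR(stSample);
    nSize := SIZEOF(stSample);

The project code then just fills in stSample and calls the logger; adding a new type is a copy of the derived FB with a different DUT.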

How to use built-in Core Data XML serialisation?

I use Cocoa's Core Data framework, which can write its data to XML via NSXMLStoreType.
For copy & paste in my app, I would now like to write some Core Data objects to NSPasteboard and read them back from there. I thought it should be possible to read and write the built-in XML representation. Of course I could create a Codable interface for my Core Data classes, but I would rather reuse the Core Data implementation.
How can I do this best?
Many thanks in advance!
The problem with this strategy is that the details of the XML store's schema are internal to Apple. If you're going to use the result with another XML store, you should be OK. But I wouldn't expect the XML schema Apple writes to disk to be useful outside of that context, or depend on it not changing.
You can specify the store type when configuring an instance of NSPersistentContainer by setting its persistentStoreDescriptions property. NSPersistentStoreDescription has a type property, which can be set to NSXMLStoreType.
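A minimal Swift sketch of that configuration (macOS only, since the XML store type is not available on iOS; "Model" and the store URL are placeholders):

    import CoreData

    let container = NSPersistentContainer(name: "Model")

    // Describe an XML-backed store instead of the default SQLite one.
    let storeURL = URL(fileURLWithPath: "/path/to/Model.xml")
    let description = NSPersistentStoreDescription(url: storeURL)
    description.type = NSXMLStoreType

    container.persistentStoreDescriptions = [description]
    container.loadPersistentStores { _, error in
        if let error = error {
            fatalError("Failed to load XML store: \(error)")
        }
    }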

Hazelcast Portable serialization

I want to use Portable serialization for objects stored in an IMap to achieve:
- fast indexing during insertion (without deserializing objects or using reflection)
- class evolution (versioning)
Is it possible to store my classes without implementing the Portable interface?
Is it possible to store 3rd-party classes like Date or BigDecimal (or classes with nested structure) which cannot implement the Portable interface, while still keeping them indexable?
You can achieve fast indexing using Portable, yes. You'll also see benefits when querying on non-indexed fields, since there will be no full deserialization. VersionedPortable supports versioning as well, but:
- You must implement the Portable interface.
- For types that aren't supported by Portable, you need to convert the data to a supported format (a long for a Date, for example), and you need to code the serialization/deserialization for each such property and handle the versioning yourself.
- Portable is backward compatible only for reads. If you update the data from an app that has a previous version of the class, you'll lose the values of new fields written previously by an app with a higher version of the Portable object.
So, depending on your exact requirements, you need to choose the correct serialization format.
If versioning is not that important (or you can handle it manually) but query performance is, then yes, Portable makes sense. But if you're planning to use versioning heavily, I would suggest a backward/forward-compatible serialization format like Google Protocol Buffers.
You can check this example to get an idea: https://github.com/gokhanoner/data-versioning-protobuf
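To make the Date-to-long point concrete, here is a rough sketch of a Portable class (hypothetical Trade class, Hazelcast 3.x-style API; the factory and class IDs would also have to be registered with a PortableFactory in the serialization config):

    import java.io.IOException;
    import java.util.Date;

    import com.hazelcast.nio.serialization.Portable;
    import com.hazelcast.nio.serialization.PortableReader;
    import com.hazelcast.nio.serialization.PortableWriter;

    public class Trade implements Portable {
        public static final int FACTORY_ID = 1;
        public static final int CLASS_ID = 1;

        private String symbol;
        private Date tradeTime;   // unsupported field type, stored as epoch millis

        @Override
        public int getFactoryId() { return FACTORY_ID; }

        @Override
        public int getClassId() { return CLASS_ID; }

        @Override
        public void writePortable(PortableWriter writer) throws IOException {
            writer.writeUTF("symbol", symbol);
            // Convert the unsupported type to a supported one by hand.
            writer.writeLong("tradeTime", tradeTime.getTime());
        }

        @Override
        public void readPortable(PortableReader reader) throws IOException {
            symbol = reader.readUTF("symbol");
            tradeTime = new Date(reader.readLong("tradeTime"));
        }
    }

Queries and indexes can then use the "symbol" and "tradeTime" field names without deserializing the whole object.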

Clojure Security: Information Flow + Capability System

Summary
Is there something like http://en.wikipedia.org/wiki/E_programming_language as a DSL inside of Clojure?
Background
I'm aware of:
http://bit.ly/N4jnTI and http://bit.ly/Lm3SSD
However, neither provides what I want.
Context
I'm a big fan of both capability systems and information flow. And I'm wondering if anyone has developed Clojure DSLs for these two techniques. The following would be ideal:
all objects have some tag (say, in their metadata) that lists who has read access to the object
when I want to run a query as user "foo", I set some context var saying "now, use only the capabilities of foo" -- then the function, when it tries to reach objects, either gets the object (if foo has access to it) or nil (if foo does not have access to it). Leaking information about the existence of objects is not a big deal to me at the moment.
Question
So the question is -- is this something easy to do as a Clojure DSL? Where each object has some capability tag, and we can execute pieces of function/code under certain tags, and the runtime system makes sure that no one gets access to things they're not supposed to access.
Thanks!
You can do this with metadata and preconditions and then create macros to add a DSL/syntax on top, though I would recommend skipping the macros and going for just preconditions and metadata.
Each object would have a piece of metadata with a list of its capabilities.
Each function would have a precondition that checked the metadata.
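A minimal sketch of that idea (all names, including the dynamic var, are made up for illustration):

    ;; The current principal's capabilities, bound per query/request.
    (def ^:dynamic *capabilities* #{})

    (defn restrict
      "Tag an object with the set of principals allowed to read it."
      [obj allowed]
      (with-meta obj {:read-caps allowed}))

    (defn read-object
      "Return obj if the current principal may read it, nil otherwise.
       The precondition insists the object actually carries a capability tag."
      [obj]
      {:pre [(contains? (meta obj) :read-caps)]}
      (when (some *capabilities* (:read-caps (meta obj)))
        obj))

    ;; Usage: run code "as user foo".
    (def secret (restrict {:balance 100} #{"foo"}))

    (binding [*capabilities* #{"foo"}] (read-object secret))  ;; => {:balance 100}
    (binding [*capabilities* #{"bar"}] (read-object secret))  ;; => nil

A macro layer could wrap this in nicer syntax (for example a with-user block around binding), but the checks themselves are just metadata lookups.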

Should I use NSFileWrappers in UIManagedDocument?

I am trying to store a plist and several binary files (let's say images) as part of a UIManagedDocument. The names of the binary files are an attribute in Core Data, and I don't need to enumerate them, just access the right one when showing the related entity.
The file structure that I want to have is:
- <File yyyyMMdd-HHmmss>.extdoc
  - StoreContent
    - persistentStore
  - AdditionalContent
    - ListStatus.plist (used to store per document defaults)
    - Images
      - uuid1.png
      - uuid2.png
      - ...
      - uuidn.png
So far, I have successfully followed the instructions in How do I save additional content into my UIManagedDocument file packages?, but when I try to add the binary files there are some things that I don't know how to do.
Should I treat the URL /the/path/File yyyyMMdd-HHmmss.extdoc/AdditionalContent (the default one provided with readAdditionalContentFromURL:error:) as an NSFileWrapper? Are there any advantages/disadvantages vs. just using the URLs? I find it more complicated to use the file wrapper, since the plist has to be read using the file wrapper accessors and NSCoder (I guess), and for the files I have to store the file wrapper for the Images directory and then obtain the corresponding node with objectForKey (I assume). But Apple's Document-Based Apps Programming Guide for iOS, regarding custom formats instead of NSData or NSFileWrapper, states: "Keep in mind that your code will have to duplicate what UIDocument does for you, and so you must deal with greater complexity and a greater possibility of error." Am I misunderstanding this?
Per-document defaults are declared as properties: the setter modifies the NSDictionary that maps the plist and marks the document as updated, and the getter accesses the dictionary with the proper key. How do I expose the ability to read/write the binary files? Should I add methods to my subclass of UIManagedDocument, such as - (void)writeImage:(NSString *)uuid; and - (UIImage *)readImage:(NSString *)uuid;? And should I keep this data in memory until the document is saved? How?
Assuming that NSFileWrapper is the way to go, if I plan to use this document with iCloud should I use file coordinators with the file wrapper? If so, how?
Any source code for each question will be greatly appreciated. Thank you.
P.S.: I know that I could save some binary data inside of Core Data, but I don't feel comfortable with that solution. Among other reasons, I would rather store the PNG data for image files than a serialized version of UIImage that won't be compatible with NSImage if I want to create a desktop app.
I'd like to say that, in general I rather like UIManagedDocument. It has a few advantages over raw Core Data. For example, it sets up the entire core data stack for you automatically. It also sets up nested managed object contexts for you, so you get free background saving. None of that is particularly earth-shattering, but it's a lot of functionality from a tiny amount of code.
I haven't played around with saving additional information...but here are my thoughts.
First, you shouldn't need to treat the new URL as a file wrapper. You should just be able to do regular file operations on the provided URL. Just make sure you have everything implemented properly in additionalContentForURL:error:, writeAdditionalContent:toURL:originalContentsURL:error: and readAdditionalContentFromURL:error:. The read and write operations need to be symmetric. And you should probably snapshot your data in additionalContentForURL:error: so that everything will be saved in a known, good state (since the save operations are asynchronous).
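For illustration, a sketch of keeping those three overrides symmetric, using a hypothetical MyManagedDocument subclass that only persists the ListStatus.plist part of the layout described in the question:

    #import <UIKit/UIKit.h>

    @interface MyManagedDocument : UIManagedDocument
    @property (nonatomic, copy) NSDictionary *listStatus; // backs ListStatus.plist
    @end

    @implementation MyManagedDocument

    // Snapshot the in-memory state; runs before the asynchronous write begins,
    // so the write method never touches live (possibly changing) state.
    - (id)additionalContentForURL:(NSURL *)absoluteURL error:(NSError **)error
    {
        return [self.listStatus copy];
    }

    // Write the snapshot; absoluteURL points at the AdditionalContent directory.
    - (BOOL)writeAdditionalContent:(id)content
                             toURL:(NSURL *)absoluteURL
               originalContentsURL:(NSURL *)originalContentsURL
                             error:(NSError **)error
    {
        NSURL *plistURL = [absoluteURL URLByAppendingPathComponent:@"ListStatus.plist"];
        return [(NSDictionary *)content writeToURL:plistURL atomically:YES];
    }

    // Read it back in exactly the shape it was written.
    - (BOOL)readAdditionalContentFromURL:(NSURL *)absoluteURL error:(NSError **)error
    {
        NSURL *plistURL = [absoluteURL URLByAppendingPathComponent:@"ListStatus.plist"];
        self.listStatus = [NSDictionary dictionaryWithContentsOfURL:plistURL];
        return YES;
    }

    @end

The Images directory would be handled the same way inside the write/read pair, so that whatever additionalContentForURL:error: snapshots is exactly what gets written and later read back.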
As an alternative, have you considered using the Allows External Storage option on the binary attribute in your data model instead of saving the files manually? This lets Core Data decide (depending on the size of the binary data) to store them outside the SQLite file automatically. I looked at the release notes, and I didn't see anything saying you couldn't use this feature with iCloud. That might be the easiest fix.
Attacking a side point for the moment (as I have not had ANY good experience with UIManagedDocument).
You can save the binary data inside of Core Data in an iOS 5.0+ application using the external-storage option for binary attributes. Then you can save the PNG data of the image to Core Data directly and not need to worry about a UIManagedDocument or about bloating the SQLite file.
There is nothing stopping you from storing the PNG instead of a UIImage.
One other thought. You may need to use an NSFileCoordinator for the read and write operations. Technically, any read or write operations in the iCloud container need to use a file coordinator (to coordinate with the iCloud sync service--this prevents accidentally corrupting a file by reading it while another process is writing to it).
I know that UIDocument wraps most of its input and output methods in file coordination automatically. I'd guess that these methods are similarly wrapped (since they give you a URL to use); however, the docs aren't very clear.
