I made a program that builds instances conforming to a given model and saves them in an XML file following the Alloy standard.
To get the A4Solution objects corresponding to those instances, I then read those XML files using the A4SolutionReader.read method. This worked great until I stumbled upon a rather big instance which, when read, causes the following exception:
Caused by: kodkod.engine.CapacityExceededException: Arity too large (4) for a universe of size 880
I would understand the analyser complaining when performing an analysis with a huge scope, but here the instance is already provided, so what justifies this exception? And is there another way to get an A4Solution object from my XML file without running into this issue?
We are working on data logging code and I would like to turn it into a library.
The only thing that different programs will need to define is the struct type (DUT) for the sample they want to save. The library will save a sample every x period. But I don't know exactly how to reference an external DUT in the library code. Is it possible to declare the DUT as an interface or something similar? There must be a way to do this, but I am not sure what it is.
CODESYS does not have generics; there is no way around that. Although I am not convinced it would be an elegant or fitting design, there is one thing you could do if you really want your type to be visible in your logging library, but it has serious drawbacks.
You can create another library, or more accurately a family of libraries that will all use the same placeholder. Each of these libraries can define a type of the same name. Reference this placeholder from your logging library, so that it will resolve the name. That will allow your library to compile. Then, by controlling which library the placeholder resolves to inside the project, you can select which type is used. The big downside is you cannot use your logging library with two different types inside the same project, as you cannot get a placeholder to resolve to two libraries at the same time.
Personally, I would probably implement the logging library with some sort of log entry writer as a function block that contains size + arrays of bytes and manages the details of actual logging, and then define inherited function blocks in projects/libraries that need logging. Each of these inherited function blocks would give access to the bytes in a typed way (method, exposed reference/pointer) and assign the size based on the data type used (SIZEOF in FB_Init, for instance). This requires a bit of code for each type, but really not much, and new types are easy to add with a 20-second copy+paste+modify.
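A rough Structured Text sketch of that idea (FB_LogSample, ST_MySample and the member layout are made-up names, not a finished design):

```
(* In the logging library: a base "log entry writer" that only knows about raw bytes *)
FUNCTION_BLOCK FB_LogSample
VAR
    pData : POINTER TO BYTE;   // points at the typed payload owned by the derived FB
    nSize : UDINT;             // payload size in bytes, assigned by the derived FB
END_VAR

(* In the project that uses the library: one small derived FB per sample type *)
FUNCTION_BLOCK FB_MyLogSample EXTENDS FB_LogSample
VAR
    stSample : ST_MySample;    // the project-specific sample struct (DUT)
END_VAR

METHOD FB_Init : BOOL
VAR_INPUT
    bInitRetains : BOOL;
    bInCopyCode  : BOOL;
END_VAR
// Wire the base FB to the typed data once, at initialisation
pData := ADR(stSample);
nSize := SIZEOF(stSample);
```

The base FB only ever sees a pointer and a size, so the library itself never needs to know the type; each project pays a few lines per sample struct.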
I understand how to use the FlatBufferBuilder and specific type builder (e.g., MyNestedTableBuilder) to get the WIPOffset and then use that to get the finished_data buffer (&[u8]). I then have been using get_root to get an object based on the buffer, so now I have an instance of MyNestedTable. Then I need to pass that to another function and create a new table instance via a new builder, MyTable, that has the field add_my_nested_table. I cannot see how to do this without unpacking MyNestedTable and rebuilding it again (which seems very inefficient). I am sure there is a good way to do this, I just haven't found it, even after studying the generated code and API.
Generally we have a need to pass objects around and reuse them, over the network or via API calls in Rust.
MyNestedTable isn't really an object; it is a handle to data inside the serialized buffer (your &[u8]), and any field access looks up that data on the fly.
None of the base APIs for the languages FlatBuffers supports (including Rust) generate code that allows automatic re-serializing, since that is not a frequent operation in most use cases (you already have the serialized data).
The way to do it is through the optional "object API", supported in C++ and some other languages, but not yet in Rust. As you can see, CasperN is working on such an API.
Until then, you may consider using nested_flatbuffer or some other construct to directly pass the serialized data to wherever it needs to go.
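For instance, assuming the outer schema stores the nested table as a nested_flatbuffer byte vector, something along these lines avoids unpacking (the my_schema_generated module and the exact MyTable/MyTableArgs shapes are assumptions about what flatc generates for such a schema):

```rust
// Hypothetical schema (my_schema.fbs):
//
//   table MyNestedTable { value:int; }
//   table MyTable { my_nested_table:[ubyte] (nested_flatbuffer: "MyNestedTable"); }
//
// With that layout, an already-finished nested buffer can be copied into the
// outer table as raw bytes, without rebuilding MyNestedTable field by field.

use flatbuffers::FlatBufferBuilder;
use my_schema_generated::{MyTable, MyTableArgs}; // assumed flatc output

fn wrap_nested(nested_bytes: &[u8]) -> Vec<u8> {
    let mut fbb = FlatBufferBuilder::new();

    // Copy the finished nested buffer verbatim into the new builder.
    let nested_vec = fbb.create_vector(nested_bytes);

    let root = MyTable::create(
        &mut fbb,
        &MyTableArgs {
            my_nested_table: Some(nested_vec),
            // any other MyTable fields keep their defaults
            ..Default::default()
        },
    );
    fbb.finish(root, None);
    fbb.finished_data().to_vec()
}
```

Depending on the flatc version, the generated Rust code may also expose a typed accessor that views those nested bytes as a MyNestedTable without copying.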
Here is some information about the context of the problem I am facing:
- We have semi-structured data (JSON from a node.js backend) in Datastore.
- After saving an entity and then getting a list of entities, both soon afterwards and even a while later, the returned data is missing one indexed property.
- Yet I can find the entity by that property's value.
I use Google Datastore via the node.js client library, @google-cloud/datastore "^2.0.0".
How can this be possible? I would understand if, due to eventual consistency, some updates were visible incompletely for a while. But why am I getting the same inconsistency, a whole property missing, for an entity saved e.g. an hour ago?
I have gone through this scenario multiple times for the same kind.
I do not have such issues with other kinds or other properties of that kind.
How can I avoid this type of issue with Google Datastore?
An answer for anyone who may run into such an issue.
We mostly do not use DTOs (data-transfer objects) or any other wrappers for our kinds in this project, but for this one a DTO was used, mostly to make sure the result objects have default values for properties omitted or absent in the entity, which usually happens for entities created by an older version of the code.
After reviewing my own code more carefully, I found a piece of code that was out of sync with the other related pieces: there was no line copying this property from the entity to the DTO object.
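Purely as an illustration (the property names and DTO shape here are invented, not the real project code), the mistake was of this form:

```js
// Hypothetical mapper from a Datastore entity to the DTO returned by the API.
function toDto(entity) {
  return {
    name: entity.name || '',
    status: entity.status || 'new',
    // BUG: the line copying the indexed property was simply missing,
    // so every result object looked as if the entity had lost it:
    // indexedProp: entity.indexedProp,
  };
}
```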
Side note: this whole situation reminds me of the story (or meme) about a guy who claimed he had found a compiler bug simply because he could not find the mistake in his own code.
I looked around on the web and couldn't find any structured way to deal with malformed/faulty records during computation. All I was able to find was the flatMap/Some/None technique plus logging.
I'm facing this problem because I have a processing algorithm that extracts more than one value from each record, but can fail to extract one of those values, and I want to keep track of those failures. Logging is not feasible because this "warning" happens so frequently that the logs would become overwhelming and impossible to read.
Since I have three different possible outcomes from my processing, I modeled it with this class hierarchy:
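Roughly like this (a sketch using pre-2.13 scala.Traversable; the exact case class names and fields are illustrative assumptions):

```scala
// A sealed result type that is Traversable over the extracted value,
// so a flatMap over it keeps values and silently drops failures.
sealed trait Result[+A] extends Traversable[A]

case class Success[A](value: A) extends Result[A] {
  def foreach[U](f: A => U): Unit = f(value)
}

// A value was extracted, but with one or more warnings attached.
case class Warning[A](value: A, warnings: Seq[String]) extends Result[A] {
  def foreach[U](f: A => U): Unit = f(value)
}

// Nothing could be extracted; only the warnings remain.
case class Failure(warnings: Seq[String]) extends Result[Nothing] {
  def foreach[U](f: Nothing => U): Unit = ()
}
```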
It holds a result and/or warnings.
Since Result implements Traversable, it can be used in a flatMap, discarding the warnings and the failed results; on the other hand, if we want to keep track of the warnings, we can process them and output them when needed.
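For instance (again a sketch; records and extract stand in for the real dataset and processing function):

```scala
// Keep only the successfully extracted values; Failure contributes nothing.
val values = records.flatMap(extract)          // extract: Record => Result[Value]

// Separately collect the warnings when we do want to inspect them.
val warnings = records.map(extract).flatMap {
  case Warning(_, ws) => ws
  case Failure(ws)    => ws
  case _              => Nil
}
```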
I am trying to store a plist and several binary files (let's say images) as part of a UIManagedDocument. The names of the binary files are stored in a Core Data attribute, and I don't need to enumerate them, just access the right one when showing the related entity.
The file structure that I want to have is:
- <File yyyyMMdd-HHmmss>.extdoc
  - StoreContent
    - persistentStore
  - AdditionalContent
    - ListStatus.plist (used to store per-document defaults)
    - Images
      - uuid1.png
      - uuid2.png
      - ...
      - uuidn.png
So far, I have successfully followed the instructions in How do I save additional content into my UIManagedDocument file packages?, but when I try to add the binary files there are some things that I don't know how to do.
Should I treat the URL /the/path/File yyyyMMdd-HHmmss.extdoc/AdditionalContent (the default one provided with readAdditionalContentFromURL:error:) as an NSFileWrapper? Are there any advantages/disadvantages versus just using the URLs? I find the file wrapper more complicated to use, since the plist has to be read using the file wrapper accessors and NSCoder (I guess), and for the files I would have to store the file wrapper for the Images directory and then obtain the corresponding node with objectForKey (I assume). But Apple's Document-Based Apps Programming Guide for iOS, regarding custom formats instead of NSData or NSFileWrapper, states: "Keep in mind that your code will have to duplicate what UIDocument does for you, and so you must deal with greater complexity and a greater possibility of error." Am I misunderstanding this?
Per-document defaults are declared as properties: the setter modifies the NSDictionary that backs the plist and marks the document as updated, and the getter accesses the dictionary with the proper key. How do I expose the ability to read/write the binary files? Should I add methods to my subclass of UIManagedDocument, such as - (void)writeImage:(NSString *)uuid; and - (UIImage *)readImage:(NSString *)uuid;? And should I keep this data in memory until the document is saved? How?
Assuming that NSFileWrapper is the way to go, if I plan to use this document with iCloud should I use file coordinators with the file wrapper? If so, how?
Any source code for each question will be greatly appreciated. Thank you.
P.S.: I know that I could save some binary data inside Core Data, but I don't feel comfortable with that solution. Among other reasons, I'd rather store the PNG data for the image files than a serialized version of UIImage that won't be compatible with NSImage if I want to create a desktop app.
I'd like to say that, in general I rather like UIManagedDocument. It has a few advantages over raw Core Data. For example, it sets up the entire core data stack for you automatically. It also sets up nested managed object contexts for you, so you get free background saving. None of that is particularly earth-shattering, but it's a lot of functionality from a tiny amount of code.
I haven't played around with saving additional information...but here are my thoughts.
First, you shouldn't need to treat the new URL as a file wrapper. You should just be able to do regular file operations on the provided URL. Just make sure you have everything implemented properly in additionalContentForURL:error:, writeAdditionalContent:toURL:originalContentsURL:error:, and readAdditionalContentFromURL:error:. The read and write operations need to be symmetric. And you should probably snapshot your data in additionalContentForURL:error: so that everything will be saved in a known, good state (since the save operations are asynchronous).
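A rough, untested sketch of what that can look like, assuming listStatus (an NSMutableDictionary backing the plist) and pendingImages (a uuid-to-PNG-data dictionary) properties on the UIManagedDocument subclass, and the ListStatus.plist/Images layout from the question:

```objc
// Snapshot whatever needs writing, so the asynchronous save works on a stable copy.
- (id)additionalContentForURL:(NSURL *)absoluteURL error:(NSError **)error
{
    return @{ @"listStatus"    : [self.listStatus copy]    ?: @{},
              @"pendingImages" : [self.pendingImages copy] ?: @{} };
}

- (BOOL)writeAdditionalContent:(id)content
                         toURL:(NSURL *)absoluteURL
           originalContentsURL:(NSURL *)absoluteOriginalContentsURL
                         error:(NSError **)error
{
    NSDictionary *snapshot = content;

    // Per-document defaults go into ListStatus.plist.
    NSDictionary *listStatus = snapshot[@"listStatus"];
    NSURL *plistURL = [absoluteURL URLByAppendingPathComponent:@"ListStatus.plist"];
    if (![listStatus writeToURL:plistURL atomically:YES]) {
        return NO;
    }

    // Each pending image is written as <uuid>.png under Images.
    NSDictionary<NSString *, NSData *> *images = snapshot[@"pendingImages"];
    NSURL *imagesURL = [absoluteURL URLByAppendingPathComponent:@"Images" isDirectory:YES];
    [[NSFileManager defaultManager] createDirectoryAtURL:imagesURL
                              withIntermediateDirectories:YES
                                               attributes:nil
                                                    error:NULL];
    [images enumerateKeysAndObjectsUsingBlock:^(NSString *uuid, NSData *png, BOOL *stop) {
        NSURL *fileURL = [imagesURL URLByAppendingPathComponent:
                             [uuid stringByAppendingPathExtension:@"png"]];
        [png writeToURL:fileURL atomically:YES];
    }];
    return YES;
}

- (BOOL)readAdditionalContentFromURL:(NSURL *)absoluteURL error:(NSError **)error
{
    NSURL *plistURL = [absoluteURL URLByAppendingPathComponent:@"ListStatus.plist"];
    NSDictionary *stored = [NSDictionary dictionaryWithContentsOfURL:plistURL];
    self.listStatus = [stored mutableCopy] ?: [NSMutableDictionary dictionary];
    // Individual images can be loaded lazily from <absoluteURL>/Images/<uuid>.png
    // when the related entity is displayed.
    return YES;
}
```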
As an alternative, have you considered using the Store in External Record File flag in your data model instead of saving it manually? This should force Core Data to (depending on the size of the binary data) automatically store them externally. I looked at the release notes, and I didn't see anything saying you couldn't use this feature with iCloud. That might be the easiest fix.
Attacking a side point for the moment (as I have not had ANY good experience with UIManagedDocument).
You can save the binary data inside Core Data in an iOS 5.0+ application using the external file reference option. Then you can save the PNG data of the image to Core Data directly and not need to worry about a UIManagedDocument or about bloating the SQLite file.
There is nothing stopping you from storing the PNG instead of a UIImage.
One other thought. You may need to use an NSFileCoordinator for the read and write operations. Technically, any read or write operations in the iCloud container need to use a file coordinator (to coordinate with the iCloud sync service--this prevents accidentally corrupting a file by reading it while another process is writing to it).
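If you do end up coordinating manually, the pattern is roughly this (sketch, untested; imageURL is whatever file in the document package you are about to read):

```objc
NSFileCoordinator *coordinator = [[NSFileCoordinator alloc] initWithFilePresenter:nil];
NSError *coordinationError = nil;

[coordinator coordinateReadingItemAtURL:imageURL
                                options:0
                                  error:&coordinationError
                             byAccessor:^(NSURL *readURL) {
    // Safe to read here; the coordinator keeps other writers out mid-read.
    NSData *png = [NSData dataWithContentsOfURL:readURL];
    // ... use png ...
}];
```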
I know that UIDocument wraps most of its input and output methods in a file coordinator automatically. I'd guess that these methods are similarly wrapped (since they give you a URL to use); however, the docs aren't very clear.