NSMutableArray question

Can you reset an NSMutableArray after deletions to put it back to its original state, with all the initial objects it held before the deletions?

Are you saying that you want to delete everything, and then magically bring it all back? You can't.
If you want to be able to undo stuff, you need to write code that remembers deleted information in anticipation of the possibility of needing to undo.

No, but if memory isn't an issue, you could duplicate the array (for example with -mutableCopy) before the deletion and restore from that copy if needed.

How to refresh the timestamp information of a specific Cache?

I ran into a situation where I need to reload the information from the database into a specific cache to do some recalculation.
I tried the following. After this I am able to read the new information from the DB, but the save still gives me a PXLockViolationException.
this.<VIEW>.Cache.Clear();
this.<VIEW>.Cache.ClearQueryCache();
this.<VIEW>.Select();
Please assist.
I used the following:
this.Base.SelectTimeStamp();
But will this reload all timestamps? I just need my cache information to be updated. Does anyone know about this?
For the case you described, where multiple people open the same document, consider using PXAccumulator. One of its purposes is to apply deltas to a property of a DAC class without invalidating the updated record. Sergey describes the internals in a bit more detail here; also feel free to read the documentation on implementing it.

EMF Merging Two Objects

I have a model generated by EMF.
I am writing an API over it to provide easier CRUD operations to users.
For this purpose, in the constructor of my API classes I create a working copy of my Ecore object using EcoreUtil.copy. All operations then occur on this working copy.
If the user calls "abandon change", I create a fresh copy of my original object and re-initialize the working copy with it.
If the user calls "save", I can't simply copy the working copy over the original, as that won't change the model (the copy's eContainer will be null and the original model will remain intact).
Hence, I want to merge my working copy back into the original. One possible solution is to set all the fields of the original one by one, but that is lengthy and error-prone when there are many fields.
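For concreteness, a minimal sketch of the working-copy pattern described above (the wrapper class is hypothetical and the model type is reduced to a plain EObject; EcoreUtil.copy is the only real EMF call):
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EcoreUtil;
public class WorkingCopyApi {
    private final EObject original;   // attached to the real model (has an eContainer)
    private EObject workingCopy;      // detached copy, its eContainer is null
    public WorkingCopyApi(EObject original) {
        this.original = original;
        this.workingCopy = EcoreUtil.copy(original);
    }
    // "abandon change": throw the edited copy away and start from the original again
    public void abandonChanges() {
        this.workingCopy = EcoreUtil.copy(original);
    }
    // "save" is the problem: the working copy is not attached to the model, so the
    // changed values would have to be copied back into the original field by field,
    // which is exactly what I want to avoid
    public void save() {
        // merge workingCopy back into original here
    }
}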
What can I do to easily perform the merge operation? What other approaches could tackle this problem?
I'm assuming this data can't be edited, or even accessed, by several users/threads at the same time.
If so, then the easiest way to implement such behavior is to use the Change Recorder, which is part of the EMF framework.
When the user starts editing the data, you simply attach the Change Recorder to the outermost object in the tree you want to track (it may be the entire model) and start recording. The changes are made directly to the original objects, but if the user calls "abandon change" you perform the rollback (undo) using the changes the Change Recorder has collected. If the user calls "save" you don't need to do anything else, since the original objects were already changed; simply dispose of the Change Recorder.
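A minimal sketch of that flow, assuming the EMF change API is on the classpath (the session wrapper and its method names are made up; ChangeRecorder and ChangeDescription are the real classes from org.eclipse.emf.ecore.change):
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.change.ChangeDescription;
import org.eclipse.emf.ecore.change.util.ChangeRecorder;
public class ChangeRecorderSession {
    private final ChangeRecorder recorder;
    // Attaching the recorder starts recording; the user's edits then go straight to the original objects.
    public ChangeRecorderSession(EObject rootObject) {
        this.recorder = new ChangeRecorder(rootObject);
    }
    // "abandon change": roll the original objects back to the recorded starting state.
    public void abandon() {
        ChangeDescription changes = recorder.endRecording();
        changes.apply();
        recorder.dispose();
    }
    // "save": the originals are already up to date, just stop recording.
    public void save() {
        recorder.endRecording();
        recorder.dispose();
    }
}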
Actually, there is already the EMF Transaction framework, which provides a transactional command stack that internally uses the Change Recorder to provide the undo and redo functionality. In your case, you just need to make use of "undo" when the user calls "abandon change".
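A hedged sketch of that approach (TransactionalEditingDomain and RecordingCommand come from org.eclipse.emf.transaction; the edit inside doExecute is a placeholder for whatever the user changes):
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.transaction.RecordingCommand;
import org.eclipse.emf.transaction.TransactionalEditingDomain;
public class TransactionExample {
    public static void main(String[] args) {
        ResourceSet resourceSet = new ResourceSetImpl();
        TransactionalEditingDomain domain =
                TransactionalEditingDomain.Factory.INSTANCE.createEditingDomain(resourceSet);
        // All edits happen inside a recorded command, directly on the original objects.
        domain.getCommandStack().execute(new RecordingCommand(domain, "Edit model") {
            @Override
            protected void doExecute() {
                // hypothetical model edits go here, e.g. order.setStatus(...)
            }
        });
        // "abandon change" is then simply an undo of everything that was recorded.
        domain.getCommandStack().undo();
    }
}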
Creating a copy of the model is not a nice idea. Instead, you can create a CompoundCommand and append a command for each modification the user makes, building up the stack as they work. If the user clicks save, execute the commands; if discard is clicked, simply don't execute them.
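A sketch of that alternative (CompoundCommand, SetCommand and EditingDomain are real EMF classes; the wrapper class, the editing domain instance and the staged features are placeholders):
import org.eclipse.emf.common.command.CompoundCommand;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.edit.command.SetCommand;
import org.eclipse.emf.edit.domain.EditingDomain;
public class PendingEdits {
    private final EditingDomain domain;
    private final CompoundCommand pending = new CompoundCommand("User edits");
    public PendingEdits(EditingDomain domain) {
        this.domain = domain;
    }
    // Record one modification without touching the model yet.
    public void stageSet(EObject owner, EStructuralFeature feature, Object value) {
        pending.append(SetCommand.create(domain, owner, feature, value));
    }
    // "save": apply everything that was staged.
    public void save() {
        domain.getCommandStack().execute(pending);
    }
    // "discard": drop the staged commands without ever executing them.
    public void discard() {
        pending.dispose();
    }
}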

Scratch a CouchDB document

Is it possible to "scratch" a couchdb document? By that I mean to delete a document, and make sure that the document and its history is completely removed from the database.
I do not want to perform a database compaction, I just want to fully wipe out a single document. And I am looking for a solution that guarantees that there is no trace of the document in the database, without needing to wait for internal database processes to eventually remove the document.
(a Python solution is appreciated)
When you delete a document in CouchDB, generally only the _id, _rev, and a deleted flag are preserved. These are preserved to allow for eventual consistency through replication. Forcing an immediate delete across an entire group of nodes isn't really consistent with the architecture.
The closest thing would be to use purge; once you do that, all traces of the doc will be gone after the next compaction. I realize this isn't exactly what you're asking for, but it's the closest thing off the top of my head.
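Purge itself is just an HTTP POST to the database's _purge endpoint with a map of document id to the revisions to remove, so it can be issued from Python (as the question prefers) with any HTTP client; a minimal sketch of the request, here in Java, with server, database, document id and revision as placeholders:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
public class PurgeDoc {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust the server (and credentials), database, doc id and rev(s).
        String url = "http://localhost:5984/mydb/_purge";
        String body = "{\"my-doc-id\": [\"1-abc123\"]}";   // doc id -> revisions to purge
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}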
Here's a nice article explaining the basis behind the various delete methods available.
Deleting anything from a file system "for sure" is difficult, and usually an expensive problem; even more so with databases in general. Depending on what "for sure" means to you, you may end up with a custom database, a custom OS and custom hardware. It is a bit like saying "I want a fault-tolerant system": everyone would like to have one, but only a few can afford it, and the good news is that most can settle for less. The same goes for deleting "for sure". I assume you are trying to address a security or privacy issue, so try to see if there is some other way to get what you need, perhaps by encrypting the document or its sensitive parts.

Why isn't there a read analog of validate_doc_update in CouchDB?

I am posing this as a suggested feature of CouchDB because that is the best way to express what I would like to achieve, and as a rant because I have not found a good reason for its absence:
Why not have a validate_doc_read(doc, userCtx) function so that I can implement per-document read control? It would work exactly as validate_doc_update does, by throwing an error when you want to deny the read. What am I missing? Has someone found a workaround for per-document read control?
I'm not sure what the actual reason is, but having read validation would make reads very slow and view indexes very hard to update incrementally (or perhaps impossible, meaning you'd basically have to have a per-user index).
The way to implement what you want is via filtered replication: you create a new DB containing only the documents a given user should be able to read.
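A sketch of kicking off such a filtered replication through the _replicate endpoint; the database names, the "filters/by_owner" filter reference and its query parameter are placeholders, and the filter function itself must already exist in a design document of the source database:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
public class FilteredReplication {
    public static void main(String[] args) throws Exception {
        // Copies only the documents the filter lets through into the per-user database.
        String body = "{"
                + "\"source\": \"http://localhost:5984/all_docs\","
                + "\"target\": \"http://localhost:5984/alice_docs\","
                + "\"create_target\": true,"
                + "\"filter\": \"filters/by_owner\","
                + "\"query_params\": {\"owner\": \"alice\"}"
                + "}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/_replicate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}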
The main problem with creating a validate_doc_read is how reduce functions would work with that behavior.
I don't believe a validate_doc_read is the best solution, because we would give away one feature in favour of another.
Instead, you have to restrict view access using a proxy.

Supplying a UITableView with Core Data the old-fashioned way

Does anyone have an example of how to efficiently provide a UITableView with data from a Core Data model, preferably including the use of sections (via a referenced property), without the use of NSFetchedResultsController?
How was this done before NSFetchedResultsController became available? Ideally the sample should only get the data that's being viewed and make extra requests when necessary.
Thanks,
Tim
For the record, I agree with CommaToast that there's at best a very limited set of reasons to implement an alternative version of NSFetchedResultsController. Indeed I'm unable to think of an occasion when I would advocate doing so.
That being said, for the purpose of education, I'd imagine that:
upon creation, NSFetchedResultsController runs the relevant NSFetchRequest against the managed object context to create the initial result set;
subsequently — if it has a delegate — it listens for NSManagedObjectContextObjectsDidChangeNotification from the managed object context. Upon receiving that notification it updates its result set.
Fetch requests sit atop predicates, and predicates can't always be broken down into the keys they reference (e.g., if you create one via predicateWithBlock:). Furthermore, although the inserted and deleted lists are quite explicit, the list of changed objects doesn't provide clues as to how those objects have changed. So I'd imagine it just reruns the predicate supplied in the fetch request against the combined set of changed and inserted records, then suitably accumulates the results, dropping anything from the deleted set that it previously considered a result.
There are probably more efficient things you could do whenever dealing with a fetch request with a fetch limit. Obvious observations, straight off the top of my head:
if you already had enough objects, none of them were deleted or modified, and none of the newly inserted or modified objects sort higher than the objects you had, then there are obviously no changes to propagate and you needn't run a new query;
even if you've lost some of the objects you had, if you kept whichever was lowest then you've got an upper bound for everything that didn't change, so if the changed and inserted objects together with those you already had make more than enough then you can also avoid a new query.
The logical extension would seem to be that you need to re-interrogate the managed object context only if the deletions, insertions and changes modify your sorted list so that, before you chop it down to the given fetch limit, the bottom object isn't one you had from last time. The reasoning is that you don't know anything about the stored objects you don't hold versus the insertions and modifications; you only know how the objects you don't hold compare to those you previously had.
