Object Reference Problem in Object Cache System

I'm trying to create an object cache system. The language I'm using is Free Pascal, but I think this problem will happen in other languages too.
The problem I'm facing is that when I remove an object from the cache and that object is referenced by another object in the cache, I end up with an invalid reference.
I know the only way is to clean up the reference in the remaining object when I'm about to remove one from the cache. When there are only a few of these cross references in the cache, this is not difficult to do.
But when the system grows it becomes a nightmare. How can I keep track of the objects that reference the one I'm about to remove from the cache, and clean up those dead references?
Is there some well-known technique to do this without requiring a lot of lookups in the cache?
Thanks in advance.
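One well-known approach is a reverse-reference index: alongside the cache, keep a map from each object's ID to the set of IDs of the objects that currently reference it. Removing an object then takes a single lookup in that map instead of a scan over the whole cache. Below is a minimal sketch of the idea; it is in TypeScript rather than Free Pascal, and the names (RefTrackingCache, clearRef) are hypothetical.

    interface CacheEntry {
        refs: Set<string>;             // IDs this object points to
        clearRef(id: string): void;    // nulls out this object's pointer to a removed ID
    }

    class RefTrackingCache {
        private entries = new Map<string, CacheEntry>();
        // reverse index: target ID -> IDs of the objects referencing it
        private backRefs = new Map<string, Set<string>>();

        add(id: string, entry: CacheEntry): void {
            this.entries.set(id, entry);
            for (const target of entry.refs) {
                if (!this.backRefs.has(target)) this.backRefs.set(target, new Set());
                this.backRefs.get(target)!.add(id);
            }
        }

        remove(id: string): void {
            // One lookup yields every referrer; no cache-wide scan is needed.
            for (const referrerId of this.backRefs.get(id) ?? []) {
                this.entries.get(referrerId)?.clearRef(id);
            }
            // Unregister the removed object as a referrer of its own targets.
            const entry = this.entries.get(id);
            if (entry) {
                for (const target of entry.refs) this.backRefs.get(target)?.delete(id);
            }
            this.backRefs.delete(id);
            this.entries.delete(id);
        }
    }

The trade-off is that the index must be kept in sync whenever a reference between cached objects is created or dropped, which is why it only pays off once the number of cross references grows.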

Related

Is it possible to free a required module in nodejs?

When you import a file in nodejs, it's loaded, evaluated and cached.
Is it possible to free the memory for that file, if you know you will never use it again (or maybe not for a long time, so it would be worth compiling it again later)?
What I want to do is import a temporary file, read its code, execute it once and then free it forever (I know it's not going to be used again, and I don't want to have memory leaks).
Basically, this means having dynamic code in nodejs.
Pages like Codility, which let you input code and execute it on the backend, should work with a similar solution... unless they run a completely new nodejs instance with that code and then kill it.
Is it possible? If so, how?
You can delete from the module cache like this. Just make sure that there are no circular dependencies, or the module will not actually be freed from memory:

    delete require.cache[require.resolve('./theModuleYouWantToDelete.js')]
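For instance, here is a sketch of the load-run-free pattern the question asks about; the file name and its run() export are hypothetical:

    // Load the module, run it once, then drop every reference to it so the
    // exported data becomes eligible for garbage collection.
    const tempPath = require.resolve('./temporalCode.js');  // hypothetical file
    let mod: any = require(tempPath);
    mod.run();                         // execute it once
    delete require.cache[tempPath];    // forget the cached module object
    mod = null;                        // drop our own reference as well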
It depends what you mean by "free" the module. Nodejs does not have a way to remove the code once it has been run, so that will always remain in memory.
If you remove all references to the module (by deleting it from the cache) and remove any other references there might be to its exported data, then any data associated with the module should be eligible for garbage collection.
For a service that lets the user run arbitrary code on the server, I would always run that in a sandboxed, separate process where you can kill the process and recover all resources used by the code, rather than running it in the main server process.
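A minimal sketch of that sandboxing approach, using Node's built-in child_process module (the script path and the 5-second timeout are assumptions):

    import { execFile } from 'child_process';

    // Run the untrusted code in its own node process. Killing the child
    // (here, automatically via the timeout option) returns all of the
    // memory and other resources it used to the operating system.
    execFile('node', ['./untrusted.js'], { timeout: 5000 }, (err, stdout) => {
        if (err) {
            console.error('child failed or was killed:', err.message);
            return;
        }
        console.log('child output:', stdout);
    });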
OK, reading the NodeJS documentation about modules, it turns out there is a public cache member, and the docs say:
Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module. This does not apply to native addons, for which reloading will result in an error.
Adding or replacing entries is also possible. This cache is checked before native modules and if a name matching a native module is added to the cache, no require call is going to receive the native module anymore. Use with care!
So I guess every evaluated module lives in this object internally, and removing a key from it, as the docs say, will also free the memory related to that portion of code (once the garbage collector does its job).
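A quick way to observe that reload behavior (the module name is hypothetical):

    const key = require.resolve('./someModule.js');   // hypothetical module

    const first = require('./someModule.js');
    console.log(key in require.cache);    // true: the evaluated module lives here

    delete require.cache[key];

    const second = require('./someModule.js');
    console.log(first === second);        // false: the next require re-evaluated the file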

Core Data - Re-saving Object in didSave

I have to check certain properties after saving the object to the database (I need to make sure first that it's saved on disk). So I thought that the didSave method of NSManagedObject would be the best place to do so.
However, after checking these properties and changing some of them, I want to re-save the object. So I call the managed object context to save the object one more time. (I did heavy testing to make sure I won't get into an infinite loop.)
Now, the problem is that the managed object context doesn't perform the second save. How do I know that? Well, first I checked the hasChanges property of the context before the second save and it returns NO. Also, the didSave method is not called one more time as a result of the re-save.
Is there something I am doing wrong? What's wrong with my algorithm?
Note:
I considered willSave at the beginning, but as it turned out, willSave is called before validation, so the object may not end up saved on disk after all. I need to perform my check and apply the new settings after saving to disk.

Is there a way to access NodeJS (V8) GC reference counts?

So I've implemented an experimental cache for my memory-hungry app and thrown a heap into the mix so I can easily find the least-accessed objects once the cache outgrows a certain limit. The idea is to purge the cache of objects that are unlikely to be re-used any time soon and, should they be needed again, retrieve them from the database.
So far, so fine, except there may be objects that have not yet been written to the database and should not be purged. I can handle that by setting a 'dirty' bit, no problem. But there is another source of problems: what if there are still valid references to a given cached object lurking around somewhere? This may lead to a situation where function f holds a reference A to an object with an ID of xxx, which then gets purged from the cache, and then another function g requests an object with the same ID xxx but gets another reference B, distinct from A. So far I'm building my software on the assumption that there should only ever be a single instance of any persisted object with a given ID (maybe that's stupid?).
My guess so far is that I could profit from a garbage-collection-related method like gc.get_reference_count( value ): any count above 1 (the 1 being the cache's own reference) would mean some closure is still holding on to value, so it should not be purged.
I haven't found anything useful in this direction. Does the problem in general call for another solution?
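One point worth noting: V8 is a tracing (mark-and-sweep) collector, not a reference-counting one, so per-object reference counts don't exist to be exposed. A technique that preserves the one-instance-per-ID invariant without them, available in Node 14+, is to hold cache entries through WeakRef: the cache then never keeps an object alive by itself, and a fresh database load happens only once every outside reference is really gone. A sketch under those assumptions (names are hypothetical); note that dirty objects would still need a strong reference somewhere until they are written out:

    class IdentityCache {
        private refs = new Map<string, WeakRef<object>>();
        // When an instance is actually collected, drop its dead map entry.
        private reaper = new FinalizationRegistry<string>((id) => this.refs.delete(id));

        get(id: string, loadFromDb: (id: string) => object): object {
            const hit = this.refs.get(id)?.deref();
            if (hit) return hit;              // same instance for the same ID
            const fresh = loadFromDb(id);     // reloaded only if no references remain
            this.refs.set(id, new WeakRef(fresh));
            this.reaper.register(fresh, id);
            return fresh;
        }
    }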

About the lifetime of transient attributes in a Core Data entity

I have a question.
I need a runtime attribute in MyEntity, and it changes very often.
And there are many MyEntity objects in Core Data (such as 10,000,000).
I know that a transient attribute won't be saved to disk, so must these 10,000,000 MyEntity objects stay in memory all the time? With so many MyEntity objects, is the memory large enough to hold all 10,000,000 of them?
If you need to change values on a large number of objects, those objects must exist. This is true whether or not you're using Core Data.
With Core Data there are various options for keeping memory under control: getting rid of individual objects by re-faulting them, or getting rid of all managed objects by resetting the managed object context, for example. But it's hard to tell what you're really trying to do here and why any of this is needed. If this attribute is transient, why would you want to change it on an object that you're not using, that's not even loaded into memory? You could load the object, change the transient value, and then get rid of the object to keep memory use under control. But since the transient attribute doesn't get saved, what's the point? When you're done, nothing has changed. Why not just skip the update completely?

Are Views in Domino Thread Safe?

As I understand it, if I open a view from a database using db.getView(), there's no point in doing this multiple times from different threads.
But suppose I have multiple threads searching the view using getAllDocumentsByKey(). Is it safe to do so and iterate over the DocumentCollections in parallel?
Also, Document.recycle() messes with the DocumentCollection; will two threads interfere with each other if they search for the same value and end up with the same documents in their collections?
Note: I'm just starting to research this in depth, but thought it'd be good to have it documented here, and maybe I'll get lucky and someone will have the answer.
The Domino Java API doesn't really like sharing objects across threads. If you recycle() a view in one thread, it will delete the backend JNI references for all objects that referenced that view, so you will find your other threads are then broken.
Bob Balaban did a really good series of articles on how the Java API and recycling work. Here is a link to part of it:
http://www.bobzblog.com/tuxedoguy.nsf/dx/geek-o-terica-5-taking-out-the-garbage-java?opendocument&comments
Each thread will have its own copy of a DocumentCollection object returned by the getAllDocumentsByKey() method, so there won't be any threading issues. The recycle() method frees the memory backing your Java object, not the Document itself, so again there wouldn't be any threading issues.
Probably the most likely issue you'll have is if you delete a document in the collection in one thread and then later try to access it in another. You'll get a "document has been deleted" error. You'll have to prepare for those types of errors and handle them gracefully.
