I have read this suggestion about recycling Domino objects:
What is the best way to recycle Domino objects in Java Beans
What is the best practice if I have a data source named document and the following code exists in a function that is called several times:
var doc = document.getDocument(true);
followed by code that does stuff to the back-end document.
Before I exit the function, should I recycle doc, or would that also recycle the data source's back-end document?
This is an excellent question, because this is one of the few exceptions to the "recycle everything" principle (two other notable examples are that you should never recycle the current session or database). It's a bad idea to recycle the back-end document of a data source, because the JSF lifecycle holds the same handle, and you'd be recycling it out from under Domino. The data source takes care of this for us, so there's no need to recycle it manually. On the other hand, if you get a handle on specific items (e.g. doc.getFirstItem("someFieldName")) or on item values that are dates, you should recycle those objects, just not the document itself.
By far the most important scenario where it's crucial to recycle Java and SSJS objects is in view iteration, because every time you advance to the next entry or document, you're leaking a handle if you skip the recycle. In most other cases, recycling is still advisable, but closer to being optional, because it takes a long time for other operations to leak enough to cause problems. But if you're iterating a very large view, you can easily run out of handles in a single iteration if you forget to recycle.
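To make that concrete, here is a minimal Java sketch of the usual iteration pattern (the view name and the processing are placeholders): fetch the next document before recycling the current one.

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.View;

// Iterate a potentially huge view without leaking handles:
// grab the next document BEFORE recycling the current one.
void processView(Database db) throws NotesException {
    View view = db.getView("SomeView"); // placeholder view name
    Document doc = view.getFirstDocument();
    while (doc != null) {
        // ... do something with doc ...
        Document next = view.getNextDocument(doc);
        doc.recycle(); // frees the back-end handle this wrapper holds
        doc = next;
    }
    view.recycle();
}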
One parting thought, however: I rarely see a situation where getting a handle on the back end document of a data source is the best approach, so I'd recommend revisiting your code to ensure that it's even necessary to obtain this handle to begin with. For instance, instead of document.getDocument(true).getItemValueString("someFieldName"), just call document.getValue("someFieldName"). The value returned should be identical, but it will run more efficiently, and you're not touching the back end document, so recycling isn't an issue. And it's less typing for every item access, which certainly adds up over time. Similarly, instead of document.getDocument(true).replaceItemValue("someFieldName", "newValue"), substitute document.setValue("someFieldName", "newValue").
I was wondering if anyone has done any perf tests on the effect that calling EF Core's SaveChangesAsync() has on performance when there are no changes to be saved.
Essentially I am assuming it costs basically nothing, and that it therefore isn't a big deal to call it "just in case"?
(I am tracking user activity in ASP.NET Core middleware, and on the way out I want to make sure SaveChanges was called so the activity is persisted to the database. Depending on what the user did, it may already have been called on the context; if so, I don't want to incur the cost of a second operation when the activity could have been persisted as part of the normal transaction/round trip.)
As you can see in the implementation, if there are no changes, nothing will be done. How much that costs in practice I can't say, but calling SaveChanges or SaveChangesAsync with no pending changes will of course have some performance impact compared to not calling them at all, since the change tracker still has to scan the tracked entities for modifications.
That's the same behavior as in EF6.
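To illustrate the middleware idea, here is a rough sketch; AppDbContext and the middleware name are hypothetical, but ChangeTracker.HasChanges() is the EF Core way to ask whether anything is pending:

using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

// Hypothetical middleware: persist any remaining tracked changes on the way
// out, skipping the call entirely when an earlier operation already saved.
public class ActivityTrackingMiddleware
{
    private readonly RequestDelegate _next;

    public ActivityTrackingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context, AppDbContext db)
    {
        await _next(context);

        // HasChanges() runs change detection in memory; no database round trip.
        if (db.ChangeTracker.HasChanges())
        {
            await db.SaveChangesAsync();
        }
    }
}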
As I understand it, if I open a view from a database using db.getView(), there's no point in doing so multiple times from different threads.
But suppose I have multiple threads searching the view using getAllDocumentsByKey(). Is it safe to do so and to iterate over the resulting DocumentCollections in parallel?
Also, Document.recycle() messes with the DocumentCollection; will two threads mess with each other if they search for the same value and end up with the same results in their collections?
Note: I'm just starting to research this in depth, but thought it'd be a good thing to have documented here, and maybe I'll get lucky and someone will have the answer.
The Domino Java API doesn't really like sharing objects across threads. If you recycle() a view in one thread, it deletes the back-end JNI references for all objects that referenced that view.
So you will find your other threads are then broken.
Bob Balaban wrote a really good series of articles on how the Java API and recycling work. Here is a link to part of it.
http://www.bobzblog.com/tuxedoguy.nsf/dx/geek-o-terica-5-taking-out-the-garbage-java?opendocument&comments
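In practice that means giving each thread its own session and its own handles, along the lines of this sketch (the database and view names are placeholders):

import lotus.domino.*;

// Every thread initializes its own Notes runtime, creates its own session,
// and recycles it when done; nothing is shared across threads.
Runnable task = new Runnable() {
    public void run() {
        NotesThread.sinitThread();
        try {
            Session session = NotesFactory.createSession();
            Database db = session.getDatabase("", "app.nsf"); // placeholder
            View view = db.getView("SomeView");               // placeholder
            // ... getAllDocumentsByKey() and iteration happen here ...
            session.recycle(); // recycles everything created from this session
        } catch (NotesException e) {
            e.printStackTrace();
        } finally {
            NotesThread.stermThread();
        }
    }
};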
Each thread will have its own copy of the DocumentCollection object returned by getAllDocumentsByKey(), so there won't be any threading issues. The recycle() method frees the memory held by the wrapper object, not the Document itself, so again there shouldn't be any threading issues.
Probably the most likely issue you'll have is if you delete a document in the collection in one thread, and then later try to access the document in another. You'll get a "document has been deleted" error. You'll have to prepare for those types of errors and handle them gracefully.
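A sketch of that defensive iteration (the field name is a placeholder): treat "document has been deleted" as an expected error and skip the entry.

import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

// Another thread may delete a document between the search and our access,
// so handle the deleted-document error gracefully and move on.
void processCollection(DocumentCollection col) throws NotesException {
    Document doc = col.getFirstDocument();
    while (doc != null) {
        try {
            String subject = doc.getItemValueString("Subject"); // placeholder field
            // ... work with the value ...
        } catch (NotesException e) {
            // e.g. "document has been deleted": log it and continue
        }
        Document next = col.getNextDocument(doc);
        doc.recycle();
        doc = next;
    }
}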
I'm returning A LOT (500k+) of documents from a MongoDB collection in Node.js. It's not for display on a website, but rather for some number crunching. If I grab ALL of those documents, the system freezes. Is there a better way to grab them all?
I'm thinking pagination might work?
Edit: This is already outside the main node.js server event loop, so "the system freezes" does not mean "incoming requests are not being processed".
After learning more about your situation, I have some ideas:
Do as much as you can in a Map/Reduce function in Mongo; if you can throw less data at Node, that might be the solution.
Perhaps this much data is eating all the memory on your system. Your "freeze" could be V8 stopping the world to do a garbage collection (see this SO question). You could use the V8 flag --trace-gc to log GCs and test this hypothesis (thanks to another SO answer about V8 and garbage collection).
Pagination, like you suggested, may help. Perhaps even splitting your data further into worker queues (one worker task with references to records 1-10, another with references to records 11-20, etc.), depending on your calculation.
Perhaps pre-process your data, i.e. somehow return much smaller data for each record. Or skip the ORM for this particular calculation, if you're using one now. Making sure each record holds only the data you need means less data to transfer and less memory for your app.
I would put your big fetch-and-process task on a worker queue, background process, or forking mechanism (there are a lot of different options here).
That way you do your calculations outside of your main event loop and keep it free to process other requests. While you should be doing your Mongo lookup in a callback, the calculations themselves may take time, thus "freezing" node, because you're not giving it a break to process other requests.
Since you don't need them all at the same time (that's what I've deduced from you asking about pagination), perhaps it's better to split those 500k documents into smaller chunks to be processed on the next tick?
You could also use something like Kue to queue the chunks and process them later (thus not doing everything at the same time).
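A sketch of the chunking idea with the current Node.js MongoDB driver (connection string, database, collection, batch size, and crunch() are all placeholders); the yield between batches is what keeps the event loop breathing:

const { MongoClient } = require('mongodb');

function crunch(docs) {
  // ... your number crunching for one batch ...
}

async function crunchAll() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const cursor = client.db('mydb').collection('records').find();

  let batch = [];
  for await (const doc of cursor) {
    batch.push(doc);
    if (batch.length >= 1000) {
      crunch(batch);
      batch = [];
      // Yield to the event loop so other work can run between batches.
      await new Promise(resolve => setImmediate(resolve));
    }
  }
  if (batch.length > 0) crunch(batch);
  await client.close();
}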
I have a Silverlight app where I've implemented the M-V-VM pattern, so my actual UI elements (Views) are separated from the data (Models). At one point, after the user has made some selections and possibly other input, I'd like to asynchronously scan the model and compile a list of options that the user has changed (different from the default), and eventually present that on the UI as a summary, but that would be a final step.
My question is: if I use a background worker to do this, then up until I actually do the UI updates I only want to read current values from one of my models, so I don't have to synchronize access to the model, right? I'm not modifying data, just reading current values...
There are lists (ObservableCollections), so I will have to call methods on those collections, like _ABCCollection.GetSelectedItems(), but again I'm only reading, not making changes. Since they are not primitives, do I have to synchronize access to them just for reads, or does that not matter?
I assume I'll have to synchronize the final step, as it will cause PropertyChanged events to fire, and eventually the Views will request the new data through the bindings...
Thanks in advance for any and all advice.
You are correct. You can read from your Model objects and ObservableCollections on a worker thread without having a cross-thread violation. Getting or setting the value of a property on a UI element (more specifically, an object that derives from DispatcherObject) must be done on the UI thread (more specifically, the thread on which the DispatcherObject subclass instance was created). For more info about this, see here.
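As a sketch (the model, its Options list, and the _summary collection are hypothetical): do the read-only scan in DoWork, and only touch bound state in RunWorkerCompleted, which fires back on the UI thread:

var worker = new BackgroundWorker();

worker.DoWork += (s, e) =>
{
    // Read-only pass over the model on the worker thread; no UI objects here.
    var changed = new List<string>();
    foreach (var option in _model.Options)
    {
        if (!Equals(option.Value, option.DefaultValue))
            changed.Add(option.Name);
    }
    e.Result = changed;
};

worker.RunWorkerCompleted += (s, e) =>
{
    // Raised on the thread that called RunWorkerAsync (the UI thread here),
    // so updating bound collections is safe.
    foreach (var name in (List<string>)e.Result)
        _summary.Add(name); // _summary is an ObservableCollection<string>
};

worker.RunWorkerAsync();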
We have a website running .NET 2.0 and have started using the ASP.Net HttpRuntime.Cache to store the results of frequent data lookups to cut down our database access.
Snippet:
lock (locker)
{
    if (HttpRuntime.Cache[cacheKey] == null)
    {
        HttpRuntime.Cache.Insert(cacheKey, GetSomeDataToCache(), null,
            DateTime.Today.AddDays(1), Cache.NoSlidingExpiration);
    }
    return ((SomeData)HttpRuntime.Cache[cacheKey]).Copy();
}
We are pessimistically locking every time we want to look at the cache. However, I've seen various blog posts around the web suggesting you should check the cache value first and only lock if it's missing, to avoid the overhead of taking the lock. That doesn't seem right, as another thread may have written to the cache right after the check.
So finally, my question is: what is the "right" way to do this? Are we even using the right thread synchronization object? I am aware of ReaderWriterLockSlim, but we're running .NET 2.0.
As far as I know, the Cache object is thread safe, so you wouldn't need the lock.
The Cache object in .NET is thread safe, so locking is not necessary. Reference: http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx.
Your code is probably making you think you'll have that item cached for a day and that your last line will always return the data, but that's not the case: the item can be evicted from the cache between the Insert and the final read (for example under memory pressure), and then the cast on the last line throws a NullReferenceException. As others said, cache operations are synchronized, so you shouldn't lock at that point.
Take a look here for the proper way of doing it.
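For reference, a sketch of the usual fix: read the entry into a local once, so eviction between the Insert and the return can't bite you (keep the lock if running GetSomeDataToCache() twice under contention is expensive):

lock (locker)
{
    SomeData data = (SomeData)HttpRuntime.Cache[cacheKey];
    if (data == null)
    {
        data = GetSomeDataToCache();
        HttpRuntime.Cache.Insert(cacheKey, data, null,
            DateTime.Today.AddDays(1), Cache.NoSlidingExpiration);
    }
    return data.Copy();
}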
Thread safe. Does that mean all other threads wait forever for your code to finish?
Thread safe means you can be sure that the item you fetch will not be "cut in half" or partially demolished by an update to the cache happening at the same time you are reading the item.
item = cache.Get(key);
But for anything you do after that, another thread can operate on the cache (or any other shared resource) in the meantime. If you want to do something to the cache based on your fetched item being null or not, you cannot be 100% sure it hasn't already been done by another instance of your own code that is a few CPU instructions ahead, servicing another reader of the same page of your online motor magazine.
It takes some bad luck: the risk of two threads touching the same cache entry non-atomically within a few lines of code of each other is small. But if it happens, you will have a hard time figuring out why the image of the Chevy is sometimes the small suitcase from page two.