Performance drop due to NotesDocument.closeMIMEEntities() - xpages

After moving my XPages application from one Domino server to another (both version 9.0.1 FP4, with similar hardware), the application's performance dropped sharply. Benchmarks revealed that the execution of
doc.closeMIMEEntities(false, "body")
which takes ~0.1 ms on the old server, now takes >10 ms on average on the new one. This difference wouldn't matter if it were only a few documents, but when initializing the application I read more than 1000 documents, so the initialization time goes from under 1 second to more than 10 seconds.
In the code, I use the line above to close the MIME entity without saving any changes after reading from it (no writing). The function returns true on both servers, yet it now takes more than 100x longer even though nothing in the entity has changed.
The fact that both servers have more or less the same hardware, and that the replicas of my application contain the same design and data on both servers, leads me to believe that the problem has something to do with the settings of the Domino server.
Can anybody help me with this?
PS: I always use session.setConvertMime(false) before opening the NotesDocument, i.e. the conversion from MIME to RichText should not be what causes the problem.
PPS: The HTTPJVMMaxHeapSize is the same on both servers (1024M) and there are several hundred MB of free memory. I only mention this in case someone suspects the problem is related to running out of memory.
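For reference, the reading pattern looks roughly like this (a simplified sketch; the helper method and variable names are illustrative, not my exact code):

    import lotus.domino.*;

    // Simplified sketch: read the "body" MIME item without converting it
    // to RichText, then close it without saving anything back.
    String readBody(Session session, Document doc) throws NotesException {
        session.setConvertMime(false);        // no MIME -> RichText conversion
        MIMEEntity body = doc.getMIMEEntity("body");
        String text = (body == null) ? null : body.getContentAsText();
        doc.closeMIMEEntities(false, "body"); // false = close without saving
        return text;
    }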

The problem is related to the "ImportConvertHeaders bug" in Domino 9.0.1 FP4, which is fixed by Interim Fix 1 (as pointed out by Knut Herrmann here).
It turned out that the old Domino server had Interim Fix 1 installed, while the "new" one did not. After applying the fix to the new Domino server, performance is back to normal and everything works as expected.

Related

CouchDB view generation performance

How can I avoid slow requests on a frequently updated view in CouchDB when an up-to-date result is not important? What I'm describing is probably caching, and I'm wondering whether there is an out-of-the-box solution that doesn't involve third-party software like an nginx cache.
What I've tried is setting the compaction fragmentation thresholds to 0%:
[{db_fragmentation, "0%"}, {view_fragmentation, "0%"}]
yet the views sometimes take 30+ seconds to become available to the consumer.
Adding &update=false to the end of the URL seems to do the job.
I am "relaxed" again

NetLogo 5.1 (and 5.0.5) BehaviorSpace Memory Leak

I have posted about this before and thought I had tracked it down to the NW extension, but memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set memory rises to the point where it hits the 1024 MB limit. I have sufficient memory to raise this, but in reality that would only delay the onset. I am using the table output, as based on previous discussions this helps, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no carry-over. I noticed in the highlighted thread that they were going to run headless. I will try this, but I wondered if anyone else had noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to complete.
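(For reference, a headless BehaviorSpace run on NetLogo 5.x can be launched roughly like this; the heap size, model file, experiment name, and output path are placeholders:)

    java -Xmx2048m -cp NetLogo.jar org.nlogo.headless.Main \
        --model MyModel.nlogo \
        --experiment experiment1 \
        --table results.csv

The -Xmx flag is also how you would raise the 1024 MB heap ceiling mentioned above.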
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension? Does it still happen if you remove some of the code from your model? What if you keep removing code: when does the problem go away? What is the smallest amount of code that still causes the problem? Almost any bug can be demonstrated with only a small amount of code, and finding that smallest demonstration is exactly what is needed in order to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM to see what kinds of objects are using the memory (see the example below). This might provide some clues to possible causes.
In general, we are not receiving other bug reports from users along these lines. It has been routine for many years now for people to use BehaviorSpace (both headless and not) to run experiments that last for hours or even days. So whatever it is you're experiencing almost certainly has a more specific cause, most likely in the nw extension, that could be isolated.
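For step 2, the JDK's own tools are enough for a first look (the pid below is a placeholder); comparing histograms taken after successive runs shows which classes are accumulating:

    jps                  # find the pid of the NetLogo JVM
    jmap -histo 12345    # histogram of live objects by class, count, and bytes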

Reloading Flash 17 times causes error #2046 and requires a browser restart

I am encountering some very strange behaviour with a Flex 4.1 app I am writing which gets in the way of testing. It seems that I can reload the app 16 times and then on the 17th, the loading process fails with
Error #2046: The loaded file did not have a valid signature
It seems to be consistently happening on the 17th reload on both Firefox 5.0 and Chrome 12. I am not sure if it's relevant, but I am running Flash Player v10.2.159.1 (also happens with 10.3.181.34) on Ubuntu 10.04. Happens with both regular and debugger versions of the player. When I run the app on Windows FF5, it doesn't seem to happen. Closing the current browser window does not seem to fix it. The only way around it is to completely close all browser windows and restart the browser. And then again after 16 successful loads, the 17th fails.
At this point I'm thinking of chalking it up as a Linux Flash bug, but I'd like to make sure and check whether anyone knows of something I should be doing to prevent this.
The user from this post seems to have had the same problem but I guess he didn't notice the pattern I have.
Any help will be greatly appreciated.
Ruy
== UPDATE ==
I just realized that after my app starts throwing the 2046 error, trying to load any other Flash that uses signed RSLs also shows the 2046 error (e.g. this app), which means the problem is not specific to my app and most likely related to the Flash cache or something of the sort.
Disclosure: I am a Flash Player Developer at Adobe.
This is unlikely to get much attention as it is Linux-only and kind of an edge case: probably annoying during dev work, but very few users will reload the same page more than 16 times. It might also be a browser issue, but it is probably us :) I will look at the Jira tomorrow and see if I can bump it up a bit, but I'll be honest: it really is an edge case and unlikely to get much love. If you want to increase your chances, make sure to add the simplest .swf test case you can make to the bug. Also, please double-check whether it still happens with the latest beta.
I also just took a look at the earlier bug reports and forum posts; you should probably file this as a Flash Player bug, not a Flex one.
Long shot guess, but it sounds similar to a problem we had: in the project properties, go to Flex Build Path - Framework Linkage and change it to "merged into code". This fixed a problem very similar to what you are describing, though I wish I knew exactly what the cause is. Good luck!
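(If you build on the command line rather than in the IDE, the equivalent of "merged into code" is, as far as I know, the static-link compiler option; the application file name is a placeholder:)

    mxmlc -static-link-runtime-shared-libraries=true Main.mxml

With static linking the app stops loading the signed framework RSLs at runtime, which would also sidestep whatever signature check is failing here.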
tl;dr: No idea on the cause, posting random possibility in hope it might give someone else an idea or two for testing.
Considering that it seems to be an unresolved bug in the Adobe issue tracker, it's unlikely that you will get any definitive answer here. Considering it occurs on both Firefox and Chrome, let's rule out browser bugs and assume it is in either some common library (Flash) or an OS API (Linux kernel implementation). A comment in one of the Jira issues specifically mentions that killing the Flash process fixes it, so it's a Flash issue and not an OS bug.
The most interesting thing I can see here is your observation that it succeeds exactly 16 times before failing to load. Time for some speculation here, from someone who's never worked on kernel or crypto dev:
If each cached entry for a 2048-bit RSA key occupies about 2 KB, then a 32 KB cache would hold exactly 16 entries before adding another one fails. So one conjecture is that each time this file is loaded, Flash caches the signature value (possibly a hashed version) for some reason, maybe to keep track of allowed and used security permissions. If these entries are never removed, then once the cache is full, all file loads will fail if caching the signature is part of checking it.
Things you can experiment with:
Reduce the size of the app to see if the page can be reloaded more often (as suggested by stackfish)
Count the number of signed RSLs used and check whether it's a power/multiple of 2 (maybe others get the error after 32 page loads if they use half the number of signed libs?)
Check whether the Linux Flash plugin has some option to increase the credentials cache or similar (or decrease it, just to see if it affects the number of loads; if so, it could be related to the problem)
I expect that to actually find a solution, you'd have to dive into the library-loading code and look at all constants related to loading signed libs that are 4, 16, or a multiple of 16 to see if they might be responsible. In short, unlikely to be solvable by anyone outside the Flash dev team, IMHO :/
This behavior could be related to a memory leak caused either by the Flex implementation or by the browser plugin. Firefox is notorious for not cleaning up memory, and the footprint will continue to grow the longer you have the same browser window open.
If you reduce the size of your Flex app to produce something very tiny, does the number of times you can reload the page go up?
I get Error #2046 on a Windows Vista 64-bit machine with a 1000 MB ATI Radeon video card.
The problem occurs only in MSN video so far.
I ran into the same problem when viewing PPT slides on icourse163.org: when I opened the course site I couldn't see the PPT, though Chrome could display it, even with the same Flash version (32.0.0.344) in both browsers. Then I copied the contents of the tar.gz file downloaded from Adobe (usr/*) to /usr, and that solved it. I hope this can help you.

SQL Server 2000 Execution Plan: Statistics Missing

I have a situation in production where a procedure takes different amounts of time in two different environments. When I looked at the execution plan, some statistics were missing; I clicked on the icons flagging this (shown in red for attention). Statistics are missing on both servers, but I am wondering about one message in particular: there is a field called "number of executes" which was 23 on the slow server and 1 on the fast server. Can someone please explain the importance of this?
Edit: Fragmentation is not a problem, because when I checked I found that reorganizing would only relocate 2% of pages. The new server was created with merge replication. Please advise on "number of executes" in the execution plan and how we can work to reduce it.
Edit: Will rebuilding the indexes make any performance improvement?
SQL 2000 has had issues with statistics and some execution plans in the past, and you may have to add query hints to make sure execution happens the way you want. For starters, make sure you are on SP4, and then apply the following patch:
http://support.microsoft.com/kb/936232
This patch, while it describes an illegal-operation issue (it resolves crashing on 64-bit machines with SQL 2000), also resolves a few other execution plan issues. Ultimately, though, I would recommend upgrading to SQL 2008, which seems to have resolved a number of the statistics issues we used to encounter.
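On the statistics and the index-rebuild edit, these are the standard SQL 2000 commands for refreshing them (the table name is illustrative):

    -- Refresh statistics on one table with a full scan
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN

    -- Or refresh out-of-date statistics across the whole database
    EXEC sp_updatestats

    -- Rebuild all indexes on a table (SQL 2000 syntax); as a side effect
    -- this also rebuilds the index statistics from a full scan
    DBCC DBREINDEX ('dbo.MyTable')

So yes, rebuilding indexes can help here, mainly because it refreshes the statistics as a side effect.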
Here is a link that explains the "number of executes" in more detail:
http://www.qdpma.com/CBO/ExecutionPlanCostModel.html

Should AspBufferLimit ever need to be increased from the default of 4 MB?

A fellow developer recently requested that the AspBufferLimit in IIS 6 be increased from the default value of 4 MB to around 200 MB for streaming larger ZIP files.
Having left the Classic ASP world some time ago, I was scratching my head as to why you'd want to buffer a BinaryWrite and simply suggested setting Response.Buffer = false. But is there any case where you'd really need to make it 50x the default size?
Obviously, memory consumption would be the biggest worry. Are there other concerns with changing this default setting?
Increasing the buffer like that is a supremely bad idea: you would allow every visitor to your site to use up to that amount of RAM. If your BinaryWrite/Response.Buffer=false solution doesn't appease him, you could also suggest that he call Response.Flush() now and then. Either would be preferable to increasing the buffer size.
In fact, unless you have a very good reason, you shouldn't even pass this through the ASP processor: write it to a special place on disk set aside for such things and redirect there instead.
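To illustrate the Flush option, here is a minimal Classic ASP sketch that streams a ZIP in chunks so the buffer never grows beyond one chunk (the file path and chunk size are arbitrary placeholders):

    <%
    ' Stream a file in chunks so the response buffer stays small.
    Response.Buffer = True
    Response.ContentType = "application/zip"

    Dim stm
    Set stm = Server.CreateObject("ADODB.Stream")
    stm.Type = 1                     ' adTypeBinary
    stm.Open
    stm.LoadFromFile "D:\files\big.zip"

    Do While Not stm.EOS
        Response.BinaryWrite stm.Read(262144)   ' 256 KB per chunk
        Response.Flush                          ' send it, then reuse the buffer
    Loop

    stm.Close
    Set stm = Nothing
    %>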
One of the downsides of turning off the buffer (you could use Flush, but I really don't get why you'd do that in this scenario) is that the client doesn't learn the content length at the start of the download. Hence the browser's download dialog at the other end is less meaningful: it can't tell how much progress has been made.
A better (IMO) alternative is to write the desired content to a temporary file (perhaps using a GUID for the file name) and then send a redirect to the client pointing at this temporary file; a sketch follows the lists below.
There are a number of reasons why this approach is better:-
The client gets good progress info in the save dialog or application receiving the data
Some applications can make good use of byte range fetches which only work well when the server is delivering "static" content.
The temporary file can be re-used to satisfy requests from other clients
There are a number of downsides though:-
If it takes some time to create the file content, writing to a temporary file first adds latency before any data is received, increasing the overall download time.
If strong security is needed on the content, having a static file lying around may be a concern, although the use of a random GUID filename mitigates that somewhat.
Some housekeeping is needed to clean up old temporary files.
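A rough sketch of the temporary-file approach (the /downloads folder, the GUID trick via Scriptlet.TypeLib, and the content-writing step are all assumptions to adapt):

    <%
    ' Build the content once, save it under a hard-to-guess name,
    ' then send the client to fetch it as static content.
    Dim guid, fileName
    guid = Left(Server.CreateObject("Scriptlet.TypeLib").Guid, 38)
    fileName = guid & ".zip"

    Dim stm
    Set stm = Server.CreateObject("ADODB.Stream")
    stm.Type = 1                    ' adTypeBinary
    stm.Open
    ' ... write the ZIP content into stm here ...
    stm.SaveToFile Server.MapPath("/downloads/" & fileName), 2   ' adSaveCreateOverWrite
    stm.Close

    Response.Redirect "/downloads/" & fileName
    %>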
