How to see memory usage of ALL scoped variables - XPages

Is there any way to see how much memory is allocated by all scoped variables?

The best way is to use the YourKit Java profiler.
You can install the agent on the Domino server and then profile the JVM. This lets you look at what's going on at run time: execution times, the number of classes and instances loaded, and how much memory they are consuming.

Not exactly what you asked for, but it may help: type the tell http xsp heapdump command on the console. It will create a heap dump file in the Domino binaries directory. Open that file in the Heap Analyzer tool (http://www-01.ibm.com/support/docview.wss?uid=swg21190608), which is also available in IBM Support Assistant (http://www-01.ibm.com/software/support/isa/).
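If you just want a rough number per variable rather than a full heap dump, here is a minimal SSJS sketch (my own approximation, not part of the answer above) that walks the scoped maps and serializes each value to estimate its size. It only handles values that implement java.io.Serializable and measures the serialized form, not the real heap footprint:

```javascript
// Rough SSJS sketch: estimate the size of each scoped variable by serializing it.
// Serialized size is only an approximation of actual heap usage.
var scopes = { application: applicationScope, session: sessionScope, view: viewScope };
for (var name in scopes) {
    var it = scopes[name].keySet().iterator();
    while (it.hasNext()) {
        var key = it.next();
        try {
            var bytes = new java.io.ByteArrayOutputStream();
            var out = new java.io.ObjectOutputStream(bytes);
            out.writeObject(scopes[name].get(key));
            out.close();
            print(name + "Scope." + key + " ~ " + bytes.size() + " bytes");
        } catch (e) {
            print(name + "Scope." + key + " is not serializable");
        }
    }
}
```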

Related

Memory profiling of a multithreaded haskell program

I have a Snap web app which serves some JS files and 1-pixel images (its main task is to respond fast rather than to serve huge HTML/media content). There are several servers behind HAProxy.
I upgraded it from GHC 7.6 to 7.8, also upgrading some libs. After the upgrade, the app started leaking little by little (on all servers), ending in OOM every 15 minutes on 8 GB machines (taking much longer on 16 GB) and restarting afterwards.
The problem is, if I compile the app for profiling and run it for some time, I can't see any memory leak anymore. It just consumes one CPU and runs in a constant, small amount of memory.
So I wanted to ask for some general advice on how to find such a bottleneck, if running under profiling doesn't help much.
UPDATE: I noticed after playing with the app that if I remove the -A100M runtime option it doesn't OOM that fast, but with the default value HAProxy's "sessions" reach their limit (so, basically, it chokes). I'm playing with different RTS options now, hoping some combination will give me both performance and stable long-term memory consumption.
UPDATE 2: Just for the record, I found that with the -A30 RTS option the app, while being memory hungry, lives quite well. 8 GB machines OOM-kill the app, but a 16 GB one looks like this: http://i.imgur.com/3W9KpFS.png (the green line is "RAM available"; you can see the deploy procedure which restarted the app on the graph). I'm happy with the result, but would be glad to know any techniques for profiling the memory of a multi-threaded app anyway.
UPDATE 3: I'm voting to close this question as "too broad". In general, I figure that if such a generic set of tools for easier memory profiling existed, it would already be documented on the wiki etc.
I have never used it myself, but maybe ticky-ticky profiling could help? It's supposed to be immune to the optimization changes caused by ordinary profiling, at the cost of being harder to interpret.
Basically, compile and link the relevant modules with the -ticky and -rtsopts flags, and run with the +RTS -rfoo.ticky flag to get heaps of data in foo.ticky.

Server performance degrading over time (a short time) - XPages

I have an XPages 9.0.1 server running multiple online form-type apps. The server runs fine and response performance is quite good. Pages load fast, users are happy.
Over time (yet to determine how long), the server performance degrades and ultimately grinds almost to a halt.
Each night I am scheduling -c "tell http restart" and it is getting me out of trouble.
I am not sure which page is causing the problem, as the degradation happens over a couple of days.
Most of our XPages use SSJS; all of our Java (of which there is not much) is appropriately recycled.
It does not seem to be affecting RAM - memory bounces up a bit and down a bit but stays well within limits. There is no correlation between the increased response times and memory use.
So where do I look and what tools can I use to isolate the problem? We are more Dev than Admin.
Cheers
Damien
There are profiling tools available that may help pinpoint which application is causing problems. From OpenNTF, XPages Toolbox is specifically for XPages and was contributed by Philippe Riand, who at the time was Chief Architect for XPages: http://www.openntf.org/main.nsf/project.xsp?r=project/XPages%20Toolbox.
There are also more heavy-duty, Java-specific tools like YourKit available.
Chapter 20 of Mastering XPages, Second Edition, specifically covers performance, and there is also a lot of information about performance tuning in the XPages Portable Command Guide.
If performance is degrading over time, it could be session timeout. By default, that's 30 minutes. You can extend it, but the danger then is that a browser cannot tell the server it's closing the session when the user closes the browser. So those sessions hang around. Equally if there are very long-running tasks, they would hang around until they complete and the session would then still be active until the timeout.
Are you recycling your Domino objects in your SSJS?
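For context, "recycling" here means releasing the backend Domino objects created while looping in SSJS. A typical sketch of the pattern (generic illustration, not code from the question; the view name is made up):

```javascript
// SSJS sketch: recycle Domino handles inside the loop so the backend C++ objects
// are released as you go instead of piling up for the whole request.
var view = database.getView("AllRequests"); // hypothetical view name
var doc = view.getFirstDocument();
while (doc != null) {
    // ... work with doc here ...
    var nextDoc = view.getNextDocument(doc);
    doc.recycle();   // release this document's backend object
    doc = nextDoc;
}
view.recycle();
```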
If you go into the server tasks in Domino Administrator, what do you see the CPU usage of the HTTP task doing? Also, what is the memory usage of your nHTTP task? You may want to watch that.
Have you gone onto the console to see if there is anything that looks bad?
If you can't pinpoint a problem, you may want to think about putting some of your pages on a different server to determine which app, if not all of them, is causing this.
Are you using scoped variables that are session or application scope? Application scope variables stay alive for the life of the application, so if you keep creating them and end up with a bunch of them, that can affect memory.
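As a hedged sketch of what that looks like (the key names below are made up for illustration): anything placed in applicationScope lives until the application is unloaded, so it should be removed or bounded explicitly.

```javascript
// SSJS sketch: applicationScope entries survive across users and requests,
// so clean them up explicitly when they are no longer needed.
applicationScope.put("cache_lookupValues", "...expensive lookup result...");

// Later, when the cached data is stale:
applicationScope.remove("cache_lookupValues");

// Or clear a whole group of keys (collect first to avoid modifying the map
// while iterating it):
var staleKeys = [];
var it = applicationScope.keySet().iterator();
while (it.hasNext()) {
    var key = it.next().toString();
    if (key.indexOf("cache_") == 0) { staleKeys.push(key); }
}
for (var i = 0; i < staleKeys.length; i++) {
    applicationScope.remove(staleKeys[i]);
}
```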
Also, there is a server and application setting for how XPages are kept in memory. The suggested setting is to keep only the current page in memory and save the others to disk. This is in the XSP properties.

Profiling Node.js web application on Linux

Which would be the best option to profile a Node.js application on Linux? I tried https://github.com/c4milo/node-webkit-agent and https://github.com/baryshev/look (the latter is based on nodetime), but they both seem pretty experimental. What surprises me the most is that the results reported by these tools differ.
The major disadvantages of look are that the heap snapshots aren't very relevant and you can't CPU-profile for more than 1 minute.
With node-webkit-agent, the Chrome browser runs out of memory.
I'm doing the profiling while sending requests to my web application with JMeter.
Not sure if you're willing to use an online service instead of a module, but you could give http://nodefly.com/ a try; it's free and has worked quite well for me.

nodejs memory profiling

I need to profile a Node process. I have some memory leaks in production that show up after the process has been running for a few days.
I've tried node-inspector + V8, but it doesn't work: in the new version of node-inspector there is no Profile tab, and in the old version an error is fired when I start profiling and debugging stops.
I've also tried nodetime.com, but it doesn't show what I need; it also takes too much memory, so it's not for production.
I've also tried DTrace (http://blog.nodejs.org/2012/04/25/profiling-node-js/), but it doesn't give me the necessary information.
So this is the information I need for memory profiling:
live instances, instance counts, size in memory, instance types
Do you know how to get that information?
You can try the look module. It's based on nodetime but works locally.
I've found node-memwatch useful.
The downside is you have to embed it in your application and write a bit of code for it, but it's useful for checking the heap at various points to see how much it changed after you did something.
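For reference, a minimal sketch of that kind of embedded check with node-memwatch (assuming the module is installed and loads as memwatch; on newer Node versions the memwatch-next fork may be needed) could look like this:

```javascript
// Minimal node-memwatch sketch: log leak events and diff the heap around a
// suspect piece of code to see which constructors grew.
var memwatch = require('memwatch'); // or require('memwatch-next') on newer Node

memwatch.on('leak', function (info) {
    // fired after several consecutive GCs with steadily growing heap usage
    console.error('possible leak:', info);
});

var retained = []; // stand-in for state your app keeps accumulating

function doSomethingSuspicious() {
    for (var i = 0; i < 100000; i++) {
        retained.push({ n: i });
    }
}

var hd = new memwatch.HeapDiff(); // heap snapshot "before"
doSomethingSuspicious();
var diff = hd.end();              // heap snapshot "after", diffed per constructor
console.log(JSON.stringify(diff.change.details, null, 2));
```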

How to use IntelliTrace Standalone Collector to detect memory leaks in production .Net applications?

Visual Studio 2012 RC has the ability to use externally collected trace files of IIS app pool data gathered by the IntelliTrace Standalone Collector. I know that in my production app there is some kind of memory leak that becomes apparent after a few hours of monitoring.
I now have my large iTrace file ready to plug into VS2012, but would like to know how to find the questionable object.
I am also in the process of using the debugger tools and following these instructions. However, I run into an error indicating that the appropriate CLR files (or something like that) are not loaded when I try to run .load SOS or any other command.
I was hoping to see a similar address list and consumed memory in the IntelliTrace analyzer - is this possible?
Some assistance would be appreciated.
IntelliTrace only profiles events and method calls. You won't get information on individual objects or memory leaks because it doesn't track memory. There's also no event provided for object creation/destruction, so you can't infer that in any case.
To track memory you will have to use the profiling tools on your app, though don't attach them to your production server! Use a test environment for it and see if you can replicate the problem.
