I've got a C# app that uses a large amount of memory. I'd like to track down which objects are using the most memory so that I can optimize things a little more. Is there a tool that can help with this? Something that would let me know which objects/variables/etc. are using RAM, and how much?
Thanks in advance.
Depending on your edition of VS2010, you may have the Analyze menu. If so, you can use it to get some information that may help.
But, as this question pointed out, it is slow, and generates huge files.
VS2010 .NET Memory Analysis - extremely slow
So, you will want to look at the various options and try the one that will best help you find just what you want.
Related
When developing XPages applications it seems to have become very popular to mainly use Java methods and beans instead of server-side JavaScript (SSJS). SSJS of course takes longer to execute because the code has to be evaluated at runtime. However, can anyone provide information about the QUANTITATIVE gain in performance when using Java? Are there any benchmarks for how much the execution times differ, for example depending on the length of the SSJS code or the functions used?
You have to use your own benchmarks. The increase in speed might not even be measurable. It is more about capabilities and your development process. Switching from SSJS to Java and expecting an instant increase in performance most likely won't happen.
Unless, of course, Java allows you to code things differently. So most of the decisions are based on capabilities, not speed. You are most welcome to run some tests and share the insights. What you can expect, e.g. opening a document in SSJS vs. Java: the difference should be in the space of a rounding error, since most of the time is spent in the underlying C call.
SSJS and Java run at almost the same speed after the SSJS has been evaluated, so you have some onramp time and similar speed thereafter.
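If you do want to run your own numbers, a minimal harness like this gives rough per-call timings. The fib workload here is a hypothetical stand-in; in a real XPages test you would call your SSJS library function versus the equivalent Java method instead:

```java
// Minimal micro-benchmark sketch: warm up once so the JIT has a chance to
// compile, then average the time over many iterations.
public class Bench {
    // Stand-in workload; replace with the code you actually want to time.
    static long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static long timeNanos(Runnable work, int iterations) {
        work.run();                                  // warm-up pass
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) work.run();
        return (System.nanoTime() - start) / iterations;  // average ns per run
    }

    public static void main(String[] args) {
        long avg = timeNanos(() -> fib(20), 100);
        System.out.println("avg ns per call: " + avg);
    }
}
```

Numbers from a harness like this are only indicative - JIT warm-up and server load will move them around - but they are enough to see whether a difference is a rounding error or not.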
I agree that the performance gain is negligible, but I will chime in to say this. Right now I am trying to learn to support an existing XPages application written without any Java, entirely in SSJS. There is code here, there, and everywhere. It is very hard to follow.
Depending on your environment, you should consider programmer productivity when considering how to build your applications, especially when you know both. Productivity for you, and those coming after you.
Stephan's answer is right on point: though Java as a language IS faster (you'd probably see performance gains proportional to the complexity of the block of code more than the number of operations running), the primary benefit is program structure. My experience has been that using Java extensively makes my code much cleaner, easier to debug, and MUCH easier to understand after coming back to it months later.
One of the nice side effects of this structural change does happen to be performance, but not because of anything inherent to Java: by focusing on classes and getters/setters, it makes it easier to really pay attention to expensive operations and caching. While you CAN cache your data excellently in SSJS using the various scopes, it's easier for your brain - both now and after you've forgotten what you did next year - to think about that sort of thing in Java.
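To make that concrete, here is a minimal sketch of the lazy-getter caching pattern in Java. The class and field names are hypothetical, and the "expensive lookup" stands in for a view or document read:

```java
import java.util.ArrayList;
import java.util.List;

// Lazy-getter caching: the expensive lookup runs once, on first access;
// every later call returns the cached field.
public class CustomerData {
    private List<String> customers;   // null until first requested
    static int lookups = 0;           // only here to show the lookup runs once

    public List<String> getCustomers() {
        if (customers == null) {
            customers = expensiveLookup();
        }
        return customers;
    }

    private List<String> expensiveLookup() {
        lookups++;                     // stand-in for a view/document read
        List<String> result = new ArrayList<>();
        result.add("Acme Corp");
        return result;
    }
}
```

The same trick is possible in SSJS with scoped variables, but with getters the caching lives next to the data it protects, which is easier to keep track of.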
Personally, even if Java executed more slowly than SSJS but the programming models in XPages were the same as they are now, I would still use Java primarily.
You are asking about pure processing performance - the speed of the computer running the code. As Stephan stated, Java is going to be a "little" faster because it doesn't need the extra step of parsing the code from a string first. In the big picture, that's really not a big deal.
I think the real "performance" gain you get by moving to Java in XPages is cleaner code with more capabilities. Yes, you can put a lot of code in SSJS libraries, and that can work really well. But I assume those are mostly individual functions that you use over and over, rather than true objects that you can keep in memory so they're there when you need them. When you get your core business logic inside Java objects, in my experience the speed of development goes significantly faster. It's not even close.
Take the Domino document object. That's a rather handy object. Imagine if it wasn't an "object" but simply a library of 50 or so functions that you need to first paste into each database. Doesn't seem right. And of course in the Domino API it's not just the domino object. There's like 60 or so different objects!
Typical XPages-with-Java development moves much - not all, but much - of the code away from the .xsp page and into Java classes, which are very similar to custom classes in LotusScript. This not only creates separation from the frontend code - making the .xsp pages easier to work with - but puts the business logic inside Java, which is similar to working with the Domino backend objects. So the backend gets easier to work with, maintain, and add onto.
And that's where a big part of the development speed improvements come from.
Getting back to your original question, which is about computer speed: I would suggest that it's much easier to cache frequently used data via Java objects and managed beans than it is with SSJS. Not having to hit the disk as much is a real speed advantage.
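As a rough sketch of that caching idea - the names here are made up, and in XPages you would register the class as an application-scoped managed bean in faces-config.xml - a map-backed bean can serve repeated requests from memory instead of from disk:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Application-scope style cache: the first request per key computes the
// value (standing in for a database read); later requests come from memory.
public class LookupCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    int misses = 0;   // only here to show the "disk hit" happens once per key

    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            misses++;                     // stand-in for the expensive disk read
            return "value-for-" + k;
        });
    }
}
```

Because the bean lives for the life of the application, every page and every user shares the same warmed cache.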
I would recommend considering performance gain in a wider context:
- performance gain in quicker running?
- performance gain in typing?
- performance gain in not making mistakes because of the editor?
- performance gain of using templating in the Java editor?
- performance gain in better reusability, eventually to server-wide plugins?
- performance gain in being comfortable building your own classes to hold complex objects?
- performance gain in easier debugging?
- performance gain in being comfortable with Validators, Converters, Phase Listeners, VariableResolvers etc?
- performance gain in being comfortable looking at Extension Libraries to investigate or extend?
- performance gain of being able to find answers more easily on StackOverflow or Google because you're using a standard language vs a proprietary language?
- performance gain in using third party Java code like Apache Commons, Apache POI etc?
To be honest, when you have got that far and understand how much code is run during a page load or partial request, performance gain in runtime of Java vs SSJS is minimal compared to something like using loaded where possible instead of rendered. The gains of Java over SSJS are much wider, and I have not even mentioned the gains in professional development.
My answer is way too long for a Stack Overflow answer, so as promised, here is a link to my blog post about this issue. Basically it has nothing to do with performance, but with maintainability, readability, and usability.
I have been working to eliminate memory leaks in our MonoTouch app and have learned a lot in the last couple of days - e.g. that it is almost always some event that needs to be unhooked before garbage collection can succeed :)
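That unhook point generalizes beyond MonoTouch events: on any managed runtime, a long-lived publisher's listener list keeps its subscribers reachable. Sketched here in Java terms (hypothetical names), the fix is simply removing the reference when you're done:

```java
import java.util.ArrayList;
import java.util.List;

// The publisher typically outlives its subscribers; as long as a listener
// stays in this list, everything it captures cannot be garbage collected.
public class Publisher {
    private final List<Runnable> listeners = new ArrayList<>();

    public void subscribe(Runnable l)   { listeners.add(l); }
    public void unsubscribe(Runnable l) { listeners.remove(l); }  // the "unhook"
    public int  listenerCount()         { return listeners.size(); }
}
```

In MonoTouch the same shape appears as C# event handlers (`+=` without a matching `-=`), which is exactly what the profiler's reference chains tend to point at.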
But now I have been playing around with the profiler tool, and I can see that most of the memory is used by strings (or so it seems); please see the following screenshot:
But as you can see, some memory is also used by Mono. I have been working through our viewmodels and views, and most of these are garbage collected correctly. If I look into what the strings are referenced by, I have no clue what to do with this info.
Do you guys have any suggestions for how I can reduce the amount of memory used by strings? :) I have tried to find a tutorial or similar that might explain what these numbers mean, but with no luck. Any help is appreciated.
Some answers from my personal experience:
For a tutorial, the only one I really know about is http://docs.xamarin.com/ios/Guides/Deployment%252c_Testing%252c_and_Metrics/Monotouch_Profiler
I find the 'Inverse References' option one of the most useful features - what matters is not that you have a lot of strings, but rather what owns those strings.
I find the best way of hunting these bugs is to reproduce them in a simple test harness and/or test sequence - as apps get bigger and I use more and more components - MvvmCross, JSON.Net, SQLite-net, etc - in more and more asynchronous ways, then I find I need to cut down on the number of these components to identify the leaks.
Once you have a simple test harness, the filter option in the HeapShot helps - as it lets you focus on classes which are in known namespaces.
Once you have a simple test harness, then comparing two HeapShot's can also help - which actions in your test UI cause what increases between HeapShots?
Differences are what matter - some libraries deliberately cache things in memory - e.g. some of the PropertyInfo's in your HeapShot images might be deliberately cached by one of the libraries in order to improve deserialisation speed.
For easier cross-referencing, adding links to linked questions:
Garbage collecting issue with Custom viewbinding in mono touch and mvvmcross
when to release objects in mono touch / mvvmcross
MVVMCross - SqlBits Memory Leak
Helping the GC in mono droid using mvvmCross
In addition to Stuart's great answer, I'd like to stress that you should profile on device. Execution on device is tweaked for runtime performance, while the simulator is tweaked for build performance (so that the edit-debug cycle is as fast as possible). Among other things, this means that more memory is used at runtime in the simulator than on device.
Is there any Java static code analyzer that can detect code that could possibly cause a memory leak? I understand that JVM profilers are used for this purpose, but that does not help us put a checkpoint in place during development itself.
Educating developers about best practices is one side of it, but how do I put an automated process in place as a checkpoint?
Any thoughts or recommendations are welcome.
From my researcher point of view, the closest thing I can think of is COSTA: http://costa.ls.fi.upm.es/
This is a tool that, using static analysis, computes the amount of memory used by programs/methods. Maybe the developers (feel free to ask, they are nice people) can tell you if COSTA is a good choice for your needs (or maybe they know something better).
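For what it's worth, the textbook pattern such a checkpoint would need to flag is a static collection that only ever grows, so its entries stay strongly reachable for the life of the class loader. A sketch (names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Every call adds an entry to a static map that is never cleared - the
// byte arrays remain reachable forever, which is a leak by design.
public class SessionRegistry {
    static final Map<String, byte[]> SESSIONS = new HashMap<>();

    public static void register(String id) {
        SESSIONS.put(id, new byte[1024]);   // no matching remove anywhere
    }
}
```

A static analyzer can spot the shape (static collection, adds without removes) even though it cannot prove at compile time how large the map will grow.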
Tell me if I'm misunderstanding what I'm seeing, but check out this link:
http://code.jquery.com/mobile/1.0a1/jquery.mobile-1.0a1.min.js
See how all the variables are named things like a, b, c, and all the code is compacted into no space at all? It makes the code completely unreadable and incomprehensible. I see it a lot - the Google search page is the same way if you view its source.
I'm guessing this is an obfuscation tactic, and I'm wondering how it's done. Obviously they don't really write code that looks like that - it must be put through some sort of transform. How is this done?
It's not primarily an obfuscation tactic. jQuery's full, readable source is publicly available.
That is minified code. The main purpose is to make it load faster, and transfer over the network faster.
There are multiple minify tools out there. I use the Google Closure Compiler, which does much more than minify: it also does some optimization, does a nice job of finding coding errors, and produces very compact code.
Minified or compiled JS is hard to read and so might serve as an obfuscation approach, but most people probably aren't using it for that reason.
The real value of this approach ( minification ) is to reduce the size of the code, consequently decreasing the download time and so making user pages available much more quickly.
There are quite a few tools designed for this purpose as a search of Google will reveal.
The security benefit is minimal, because there are also tools designed to prettify JavaScript code, which will turn minified code back into something readable. As everyone knows, security through obscurity is very nearly equivalent to no security at all. If you are putting code where it is publicly available, people can read it. The trick is to ensure that this doesn't matter.
I'm a new programmer working on a .NET profiler in Visual C++.
I've read many forums and weblogs about .NET profiling, and I have these questions:
Must my profiler application be unmanaged code, or can I use some .NET classes in it? And what type of project should I create - ATL with MFC, or something else?
Another question: how can I register my profiler DLL so that every application on my computer uses this profiler?
Where is the best place to rewrite the IL of a method (the profiler's Enter hook, or JITCompilationStarted)?
How can I get the input variables of the old method and pass them to the new method?
How can I change the IL of a property, or of a whole class?
I want to change all the DateTime formats in my DLLs, and I think I must search for the names of those methods in JITCompilationStarted and then rewrite those methods. Do you have any better solution? Thanks a lot.
It is good that you want to try this. I would suggest that you be aware of some old ideas about profiling that are less than helpful, and try to improve on them.
I would suggest that the primary focus should be on lines of code, not functions, and that the most useful statistic to get is, for each line, the percent of time it is responsible for (i.e. on the call stack). (The advantage of getting percentage is that you don't have to care how long things take, how many times they are invoked, or whether they are competing with other processes.)
I think the best way to get that information is by means of stack samples taken at random wall-clock times. Don't exclude samples just because they occur when the program is blocked, unless you want to be blind to needless I/O. A good approach is to let the user turn sampling on/off, so you don't take a lot of samples while waiting for user input.
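A minimal sketch of that sampling idea - take stack snapshots at intervals, then report per-location percentages - might look like this in Java (a real profiler would sample other processes via OS facilities rather than in-process, and would handle blocked threads as discussed above):

```java
import java.util.HashMap;
import java.util.Map;

// Take n stack samples of a thread and report, per code location, the
// percentage of samples in which that location appeared on the stack.
public class Sampler {
    public static Map<String, Double> sample(Thread target, int n) {
        Map<String, Integer> hits = new HashMap<>();
        for (int i = 0; i < n; i++) {
            for (StackTraceElement e : target.getStackTrace()) {
                // Count a hit whether the frame is on top or deep in the stack,
                // so a line is charged for all time it is "responsible for".
                hits.merge(e.getClassName() + ":" + e.getLineNumber(), 1, Integer::sum);
            }
            try { Thread.sleep(1); }                  // wall-clock spacing
            catch (InterruptedException ie) { break; } // stop sampling early
        }
        Map<String, Double> percent = new HashMap<>();
        hits.forEach((loc, c) -> percent.put(loc, 100.0 * c / n));
        return percent;
    }

    public static void main(String[] args) {
        // Sample a busy worker thread and print the hot locations.
        Thread busy = new Thread(() -> { while (!Thread.interrupted()) Math.sqrt(42.0); });
        busy.setDaemon(true);
        busy.start();
        sample(busy, 50).forEach((loc, p) -> System.out.printf("%5.1f%%  %s%n", p, loc));
        busy.interrupt();
    }
}
```

Note that the percentages are per location, not exclusive time: a line that appears on the stack in every sample reports 100% even if its callees did the actual work, which is exactly the "responsible for" statistic described above.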
An example of a good profiler that takes this approach is RotateRight/Zoom. Good luck.