I have found answers explaining how to disable concurrent garbage collection for an application that uses a config file. But my C# app is a class library built for ArcGIS and doesn't have a config file of its own.
How can I disable concurrent GC in this case?
Thanks!
The decision about which GC flavor to use is up to the hosting application (the executable and its config), not external DLLs.
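If it helps, here is a minimal C# sketch (assuming .NET 4.5 or later, which added GCSettings.IsServerGC): from inside the library you can only observe the GC flavour the host process chose, and at most request non-concurrent behaviour at runtime via GCLatencyMode.Batch - you still can't change the host's config.

    using System;
    using System.Runtime;

    // Hypothetical helper inside the ArcGIS class library.
    public static class GcInfo
    {
        public static void Report()
        {
            // Both values are inherited from the hosting .exe and its config.
            Console.WriteLine("Server GC:    " + GCSettings.IsServerGC);
            Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
        }

        public static void RequestNonConcurrent()
        {
            // Batch disables GC concurrency while this mode is in effect.
            GCSettings.LatencyMode = GCLatencyMode.Batch;
        }
    }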
Visual Studio 2012 RC can use externally collected trace files of IIS app pool data gathered with the IntelliTrace Standalone Collector. I know that my production app has some kind of memory leak that becomes apparent after a few hours of monitoring.
I now have my large iTrace file ready to plug into VS 2012, but would like to know how to find the questionable object.
I am also in the process of using the debugger tools and following these instructions. However, I run into an error indicating that the appropriate CLR files (or something like that) are not loaded when trying to run .load SOS or any other command.
I was hoping to see a similar address list and consumed memory in the IntelliTrace analyzer - is this possible?
Some assistance would be appreciated.
IntelliTrace only records events and method calls. You won't get information on individual objects or memory leaks because it isn't tracking memory, and there is no event for object creation/destruction, so you can't infer that either.
To track memory you will have to run profiling tools against your app - but don't attach them to your production server! Use a test environment and see if you can replicate the problem.
Migrated an application from WebSphere v6 to v8 and started getting memory leaks. The primary suspect is org.apache.axis2: it looks like each time the application calls a web service, a ServiceClient object is created by WAS 8 and stored in something called ClientConfigurationContextStore, and it is never garbage collected. Has anybody had a similar issue?
Fixed the problem by forcing the original Axis 1.4 SOAP implementation over the supplied one. This was done by placing two files in WEB-INF/services of the application. The first file is called javax.xml.soap.MessageFactory and contains 'org.apache.axis.soap.MessageFactoryImpl'; the second is called javax.xml.soap.SOAPConnectionFactory and contains 'org.apache.axis.soap.SOAPConnectionFactoryImpl'. Now javax.xml.soap.SOAPConnectionFactory.newInstance() returns the org.apache.axis implementation, where before it returned the com.ibm.ws.webservices one. No memory leaks anymore.
If you don't have the problem in WebSphere v6, it's possible it is a leak in v8 itself. But it's also possible that v8 is being more strict about something that v6 was letting you get away with.
Have you checked that you're reusing all the Axis2 client objects you can, rather than recreating them on every call when you don't need to? I recall us having some leakage in Axis2 client code under WAS v6.1 and realizing that we were recreating objects we could have been reusing.
In one of our projects we used Axis2 1.6.2 as the service client. The application server was WebSphere 7, and in the test environment it ran out of memory from time to time. When I examined the heap dump, the AxisConfiguration class held lots of AxisService instances. I was instantiating a ServiceClient for every request, and I saw that garbage collection sometimes finalized those objects too late. So we switched to a singleton ServiceClient, and that solved our problem.
Is there any way to see how much memory is allocated by all scoped variables?
The best way is to use the YourKit Java profiler.
You can install the agent on the Domino server and then profile the JVM. This gives you the ability to see what's going on at run time, see execution times, and see how many classes and instances are loaded and how much memory they are consuming.
Not exactly what you asked for, but it may help: type the tell http xsp heapdump command on the console. It will create a heapdump file in the Domino binaries directory. Open that file in the Heap Analyzer tool (http://www-01.ibm.com/support/docview.wss?uid=swg21190608), which is also available in IBM Support Assistant (http://www-01.ibm.com/software/support/isa/).
Background...
I have to build a new ASP.NET MVC app that uses an existing class library which is complex and can't be rewritten at this stage. The main problem is that this class library has a huge initialisation hit - it takes up to 10 minutes to load all its data into memory. This is fine for the production environment, where it performs fast once IIS has started up. For development, however, it's a nightmare, because every time you build the solution and start it up in a browser it takes ages.
Possible Solution?
So the idea was that a Castle Windsor (IoC) lifestyle could be used to hold this in memory, so that only recycling the application pool would force an expensive reload. I remember having a problem before where Windsor was keeping code in memory, so even after changing it and recompiling, IIS was still running the old code - in that scenario it was a problem, but in my new scenario it's exactly what I'd like.
Does anyone know how this can be done? I have tried a dummy project using the Singleton lifestyle, but after changing the MVC project it still reloads the class library.
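Roughly, the registration I tried looked like this (IExpensiveLibrary / ExpensiveLibrary are placeholder names for the real types):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    // Placeholder types standing in for the real class library facade.
    public interface IExpensiveLibrary { }
    public class ExpensiveLibrary : IExpensiveLibrary
    {
        public ExpensiveLibrary()
        {
            // the ~10 minute data load happens here
        }
    }

    public static class ContainerSetup
    {
        public static IWindsorContainer Build()
        {
            var container = new WindsorContainer();
            // Singleton only means one instance per container; the container
            // itself dies with the AppDomain on every rebuild.
            container.Register(
                Component.For<IExpensiveLibrary>()
                         .ImplementedBy<ExpensiveLibrary>()
                         .LifestyleSingleton());
            return container;
        }
    }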
If the data serializes, you could store it in a cache that keeps its state when you recompile. For example, memcached runs as a separate process: you can change the bin folder or restart the dev server process and the cache keeps its state. There's a provider for accessing memcached on CodePlex.
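A rough sketch of that idea, assuming the Enyim memcached client (the CodePlex provider wraps the same approach); CacheablePayload and the key name are placeholders:

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    // Placeholder for whatever serializable snapshot the class library can expose.
    [Serializable]
    public class CacheablePayload
    {
        public byte[] Data { get; set; }
    }

    public static class WarmCache
    {
        private const string Key = "expensive-library-state";

        public static CacheablePayload LoadOrBuild(Func<CacheablePayload> slowBuild)
        {
            // MemcachedClient reads its server list from the app config by default.
            using (var client = new MemcachedClient())
            {
                var cached = client.Get<CacheablePayload>(Key);
                if (cached != null)
                    return cached;          // survives a rebuild: memcached is out of process

                var fresh = slowBuild();    // the slow ~10 minute initialisation
                client.Store(StoreMode.Set, Key, fresh);
                return fresh;
            }
        }
    }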
Maybe you could serialize the contents of the loaded library and save it in binary form on disk. This could potentially speed up the load. It's a crazy idea, but then again, having a class library that takes 10 minutes to load is crazy too.
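A minimal sketch of that (purely hypothetical names, and assuming the loaded state really is serializable):

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    public static class SnapshotStore
    {
        // Save the library's in-memory state once the slow load has finished.
        public static void Save(object libraryState, string path)
        {
            using (var stream = File.Create(path))
            {
                new BinaryFormatter().Serialize(stream, libraryState);
            }
        }

        // On the next startup, rehydrate from disk instead of rebuilding.
        public static object TryLoad(string path)
        {
            if (!File.Exists(path))
                return null;

            using (var stream = File.OpenRead(path))
            {
                return new BinaryFormatter().Deserialize(stream);
            }
        }
    }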
Are there any performance limitations to using IBM's asynchbeans?
My app's JVM core dumps are showing numerous occurrences of orphaned threads. I'm currently using native JDK unmanaged threads. Is it worth changing over to managed threads?
From my perspective, asynchbeans are a workaround for creating threads inside a WebSphere J2EE server. So far so good: WebSphere lets you create a pool of "worker" threads, controlling the maximum number of threads this way - a typical J2EE scalability concern.
I had some problems using asynchbeans inside WebSphere on "unmanaged" threads (hacked callbacks from a JMS listener via the "outlawed" setMessageListener). I was "asking for it" by not using MDBs in the first place, but I have requirements that do not fit the MDB model.