How to analyze memory leaks for "Azure Web Apps" (PaaS) - Azure

I am looking to analyze memory leaks for a web app deployed in Azure.
Referring to the following URL
https://blogs.msdn.microsoft.com/kaushal/2017/05/04/azure-app-service-manually-collect-memory-dumps/
we were able to collect memory dumps and analyze them. However, since we could not inject the LeakTrack dll (i.e. enable memory leak tracking) while collecting the dumps, the memory analysis reports that leak analysis was not performed because the dll was not injected.
Please suggest how to identify memory leaks from the dump in this scenario.

As you said, DebugDiag currently can't create reflected process dumps, and ProcDump doesn't have a way to inject the LeakTrack dll to track allocations. So we can work around this by combining both tools.
We can simply go to the Processes tab in DebugDiag, right-click the process, and choose "Start Monitoring for Leaks."
Alternatively, we can script DebugDiag and ProcDump to carry out the individual tasks we've set out for them.
Once we have the PID of the troubled process, we can use a script to inject the LeakTrack dll into the process. With the PID known and the script created, we can launch DebugDiag's script host from a command line.
Such as:
C:\PROGRA~1\DEBUGD~1\DbgHost.exe -script "<path to your LeakTrack injection script>" -attach <your PID>
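Putting the two tools together, a minimal sketch of the end-to-end workflow might look like the console session below. The script name InjectLeakTrack.vbs, the PID 1234, and the dump path are placeholder values for illustration; ProcDump's -r (reflected/clone dump) and -ma (full memory dump) switches are real, but adjust all paths to your own installation.

rem 1) Inject the LeakTrack dll into the target process so allocations are tracked
rem    (InjectLeakTrack.vbs is a placeholder name for your DebugDiag injection script)
C:\PROGRA~1\DEBUGD~1\DbgHost.exe -script "C:\scripts\InjectLeakTrack.vbs" -attach 1234

rem 2) Let the process run long enough to accumulate allocation data (typically 15+ minutes)

rem 3) Capture a reflected full memory dump with ProcDump
rem    -r = dump from a process clone (reflection), -ma = full memory dump
procdump.exe -accepteula -r -ma 1234 C:\dumps\leak.dmp

The resulting dump can then be fed back into DebugDiag's memory analysis, which should now find the LeakTrack data it was previously missing.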
For more detail, you could refer to this article.
Here is also the reference case.

Related

What are the proper tools and techniques to analyze a core dump file in Linux

I'm not asking how to find the cause of a crash. Actually there is no crash at all. I can't exclude the possibility of a memory leak, but the executable passed Valgrind analysis during stress testing. However, when it runs in the cloud under heavy load, it gradually consumes a lot of memory. DevOps had to use kill -6 pid to kill the process, which generated a core dump file, and then restart it. With that core dump, what are the good tools and techniques to help you locate which part of the code contributed to the very high memory consumption? Thanks!
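For reference, the dump-and-first-look step described above could look roughly like this (the binary name ./server is a placeholder; gdb needs the same executable and debug symbols that produced the core):

# Signal 6 (SIGABRT) aborts the process; if core dumps are enabled for it
# (e.g. ulimit -c unlimited before it was started), the kernel writes a core file
kill -6 <pid>

# Open the core together with the executable that produced it
gdb ./server core
(gdb) info threads          # what each thread was doing at abort time
(gdb) bt full               # full backtrace of the current thread
(gdb) info proc mappings    # memory regions and their sizes (recent gdb can read these from a core)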

How to diagnose a memory leak in an Azure WebJob

I suspect that I may have a memory leak in a WebJob, but I'm not certain how to definitively prove that I do. I suspect that I can find the information by going to /processExplorer in the Kudu management console, starting a profile, and downloading the results. However, I am not entirely sure if this is the route to go or what I should do with the file once I get it.
Any suggestions would be appreciated.
I can find the information by going to /processExplorer in the Kudu management console, starting a profile, and downloading the results
After you get the .diagsession file, you can open it with Visual Studio. You will see the CPU usage trend, but memory data is not included in this file. To identify whether there is a memory leak, the steps below are for your reference.
Refresh the Process Explorer in Kudu manually at regular intervals (for example, once every 30 seconds).
Each time you refresh the Process Explorer, record the private memory and virtual memory values; these are what you will use to diagnose the leak. Click the Properties button next to the process name to see the private memory and virtual memory of the current process.
Once you have recorded enough data points, compare the growth rates of virtual memory and private memory. If both keep growing steadily, or virtual memory grows faster than private memory, that is a strong indication of a memory leak. (If you prefer not to click through the UI for every sample, see the polling sketch after these steps.)
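A rough alternative to manual sampling is to poll Kudu's process API and keep timestamped raw JSON for later comparison. This is only a sketch: <yoursite>, <pid>, and the deployment credentials are placeholders, and the exact JSON field names for private and virtual memory can vary between Kudu versions, so inspect one sample response before relying on it.

# Poll the Kudu process API every 30 seconds and append timestamped snapshots to a log
while true; do
  echo "=== $(date -u +%Y-%m-%dT%H:%M:%SZ) ===" >> memory-samples.log
  curl -s -u '<deployment-user>:<deployment-password>' \
       "https://<yoursite>.scm.azurewebsites.net/api/processes/<pid>" >> memory-samples.log
  echo "" >> memory-samples.log
  sleep 30
done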
If you need more detail on the leak, you can download a memory dump file from the Process Properties page and inspect it with WinDbg. You can also analyze the dump file online using Diagnostics as a Service for Azure Web Sites; the link below explains how to use it.
DaaS – Diagnostics as a Service for Azure Web Sites
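For the WinDbg route, a typical first pass over a dump from a .NET web app looks something like the commands below. These are standard SOS commands, but the module to load (clr vs. coreclr) and the type name MySuspiciousType are assumptions you will need to adjust for your own app.

$$ Load the SOS extension that matches the CLR loaded in the dump
.loadby sos clr

$$ Summarize the managed heap: object counts and total size per type
!dumpheap -stat

$$ List instances of a type that looks suspiciously large in the summary
!dumpheap -type MySuspiciousType

$$ For one of those object addresses, find out what is keeping it alive
!gcroot <object address>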

Request: Analyse GC offline using a tool

For good reasons, I do NOT have access to our JVM-based web app servers while they are running in live production; the only option is to monitor the activity offline.
Hence neither I nor my team can monitor the JVM's garbage collection for any irregular memory usage.
So I am asking the experts: is there any way to set this up with JRE settings applied at initial startup?
These settings should continuously write to a log file, ideally on an hourly basis.
The log file could then be analyzed offline with a tool such as VisualVM, so we can easily work out the reason for a crash or irregular behaviour from the charts it provides.
Can somebody help me with the JVM settings?
With regards,
karthik
Garbage Collection activity
You will need to activate GC logging using the following JVM options: -Xloggc:/path/to/logfile/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps.
Then you can use tools such as GCViewer (free, open-source), HPJmeter (free) or JClarity Censum (commercial) to analyse the logfile afterwards.
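To cover the "constantly write to a log file" requirement, the GC log can also be rotated automatically. A minimal startup sketch with HotSpot/Java 8 flags follows (app.jar, the path, and the sizes are placeholders; note that HotSpot rotates by file size rather than strictly by the hour):

# Start the application with GC logging and size-based log file rotation (Java 8 / HotSpot)
java -Xloggc:/var/log/myapp/gc.log \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -XX:+PrintGCDateStamps \
     -XX:+UseGCLogFileRotation \
     -XX:NumberOfGCLogFiles=24 \
     -XX:GCLogFileSize=20M \
     -jar app.jar

On Java 9 and later these flags were replaced by unified logging, e.g. -Xlog:gc*:file=/var/log/myapp/gc.log:time,uptime:filecount=24,filesize=20m.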
Thread Dumps
You can use VisualVM with the TDA (Thread Dump Analyzer) plugin. TDA is also available as a standalone application if you want to visualize thread dumps afterwards.
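Thread dumps can be captured on a schedule by whoever does have shell access to the box, for example with jstack (the PID and output path are placeholders):

# Capture a thread dump (including lock information) into a timestamped file
jstack -l <pid> > /var/log/myapp/threads-$(date +%Y%m%d-%H%M%S).txt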
Heap Dumps
You can use jhat (a standard JDK tool) or Eclipse Memory Analyzer to visualize a heap dump.
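Heap dumps can likewise be taken from outside the application with jmap (the PID and file path are placeholders; taking a dump pauses the JVM briefly, so schedule it carefully on a loaded server):

# Write a binary heap dump of live objects for the JVM with the given PID
jmap -dump:live,format=b,file=/var/log/myapp/heap.hprof <pid>

# Browse the dump in jhat's built-in web UI, or open the .hprof file in Eclipse MAT
jhat /var/log/myapp/heap.hprof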
What about Memory Leaks
If you have a long-running GC log, you can give it to Censum, which will tell you whether your application suffers from a memory leak. Once you have that first indication, you can take some heap snapshots and analyze them with Eclipse MAT or NetBeans Profiler to find out which objects are leaking.
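It also helps to have the JVM write a heap dump automatically if it does run out of memory, so there is something to analyze offline even after an unattended crash. These are standard HotSpot flags; the path is a placeholder:

# Automatically write a heap dump when an OutOfMemoryError occurs
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/myapp/ \
     -jar app.jar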

How to use IntelliTrace Standalone Collector to detect memory leaks in production .Net applications?

Visual Studio 2012 RC has the ability to use externally collected trace files of IIS app pool data collected by the IntelliTrace Standalone Collector. I know that in my production app there is some kind of memory leak that is apparent after a few hours of monitoring.
I now have my large iTrace file ready to plug into VS2012, but would like to know how to find the questionable object.
I am also in the process of using the debugger tools and following these instructions. However, I run into an error indicating that the appropriate CLR files (or something like that) are not loaded when trying to run .load SOS or any other command.
I was hoping to see a similar address list and consumed memory in the IntelliTrace analyzer - is this possible?
Some assistance would be appreciated.
IntelliTrace only profiles events and method calls. You won't get information on individual objects or memory leaks because it's not tracking memory. There's also no event provided for object creation/destruction, so you can't infer that either.
To track memory you will have to run memory profiling tools against your app, but don't attach them to your production server! Use a test environment and see if you can replicate the problem there.
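One low-impact fallback that does work against a running server, if taking a couple of dumps is acceptable, is to capture two full dumps some time apart with ProcDump and compare the managed heap statistics in WinDbg/SOS. This is not something IntelliTrace does for you, just a common alternative; the PID 1234 and paths are placeholders.

rem Capture two full dumps of the worker process, for example an hour apart
procdump.exe -accepteula -ma 1234 C:\dumps\before.dmp
procdump.exe -accepteula -ma 1234 C:\dumps\after.dmp

Then open each dump in WinDbg, run .loadby sos clr followed by !dumpheap -stat, and compare which types grew between the two snapshots.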

How to see memory usage of ALL scoped variables

Is there any way to see how much memory is allocated by all scoped variables?
The best way is to use the YourKit Java profiler.
You can install the agent on the Domino server and then profile the JVM. This will give you the ability to see what's going on at run time: execution times, the number of classes and instances loaded, and how much memory they are consuming.
Not exactly what you asked for, but it may help: type the tell http xsp heapdump command on the console. It will create a heap dump file in the Domino binaries directory. Open that file in the Heap Analyzer tool (http://www-01.ibm.com/support/docview.wss?uid=swg21190608), which is also available in IBM Support Assistant (http://www-01.ibm.com/software/support/isa/).
