We are developing a Windows Store app with XAML, and we would like to run the profiler in instrumentation mode to get results with function execution timings (how long each function takes to execute).
But when we try to add the root project as the target of an instrumentation-type profiling session, it says "Switch to Sampling mode". If we switch to sampling mode, all I can see is inclusive and exclusive samples. How do we get function execution time tracks?
Based on this documentation from Microsoft, it is listed as not supported:
These profiling features and options are not supported when profiling Windows Store apps
Profiling managed and native code using the instrumentation method.
I haven't been able to find any alternative solutions either.
Currently I am trying to debug a .NET Core application under Linux.
The trouble is, it fails somewhere right at the beginning, and I cannot tell where. Logging is impossible under the current circumstances.
As far as I can see on the Internet, and (with a notable lack of systematization and consistency) on MSDN specifically, the only currently available options for Linux are:
debug remotely (would not do well in my case);
Rider EAP by JetBrains (a proprietary solution);
using lldb.
So, my questions are:
Is there any way to launch the .NET Core self-contained app (via the "dotnet Some.dll" command) in such a way that it instantly breaks (i.e. as if there was a breakpoint) at the entry point?
If not, how can one launch a .NET Core console application under lldb from the start (since the numerous examples and issues across the Internet all show attaching to an already-running .NET Core process)?
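For reference, one pattern that avoids attaching is to launch the app under lldb directly. A sketch only - the runtime and plugin paths below are illustrative and depend on the installed .NET Core version:

```shell
# Launch under lldb instead of attaching to a running process:
lldb -- dotnet bin/Debug/netcoreapp2.0/Some.dll

# Then, at the (lldb) prompt:
#   process launch --stop-at-entry    # stop in the native entry point
#   plugin load /usr/share/dotnet/shared/Microsoft.NETCore.App/2.0.0/libsosplugin.so
#   bpmd Some.dll Some.Program.Main   # SOS: pending breakpoint at the managed Main
#   process launch                    # run until the managed breakpoint is hit
```

`--stop-at-entry` halts in native startup code; reaching the managed entry point needs the SOS plugin that ships alongside the runtime.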
Once again, there is the dotnet-dump utility, which also works only with already-running processes - so even dumps are an unavailable option for processes that crash almost instantly. I expected there might be a way to make it dump at launch, like an (imaginary) "dotnet-dump collect SomeInvocation.dll" alongside the (actually existing) "dotnet-dump collect --process-id 1234". Is there such a way?
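One possibly relevant knob, assuming the runtime is .NET Core 3.0 or newer: CoreCLR itself can be configured through environment variables to write a dump when the process crashes, with no attach step at all. A configuration sketch (the dump path is illustrative):

```shell
# Ask CoreCLR to produce a core dump on crash:
export COMPlus_DbgEnableMiniDump=1            # enable dump-on-crash
export COMPlus_DbgMiniDumpType=4              # 4 = full dump
export COMPlus_DbgMiniDumpName=/tmp/dump.%p   # %p expands to the PID
dotnet Some.dll                               # if it crashes, /tmp/dump.<pid> appears
```

The resulting file can then be opened with dotnet-dump analyze or lldb after the fact.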
I know of the existence of nvvp and nvprof, of course, but for various reasons nvprof does not want to work with my app that involves lots of shared libraries. nvidia-smi can hook into the driver to find out what's running, but I cannot find a nice way to get nvprof to attach to a running process.
There is a flag --profile-all-processes which does actually give me a message "NVPROF is profiling process 12345", but nothing further prints out. I am using CUDA 8.
How can I get a detailed performance breakdown of my CUDA kernels in this situation?
As the comments suggest, you simply have to make sure to start the CUDA profiler (these days that means Nsight Systems or Nsight Compute, no longer nvprof) before the processes you want to profile. You could, for example, configure it to run on system startup.
Your inability to profile your application has nothing to do with it being an "app that involves lots of shared libraries" - the profiling tools profile such applications just fine.
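For instance, with the current tools a run-from-launch session looks roughly like this (the application and report names are illustrative):

```shell
# Nsight Systems: whole-run timeline (kernel launches, memcpys, API calls)
nsys profile -o timeline --trace=cuda ./my_app

# Nsight Compute: detailed per-kernel performance breakdown
ncu -o kernels ./my_app
```

Both write report files (timeline.qdrep / kernels.ncu-rep) that you can open in the corresponding GUI afterwards.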
I've been looking for the process attach solution too but found no existing tool.
A possible direction is to use the lower-level CUDA APIs to build your own tool, or to integrate them into your tool. See CUPTI: https://docs.nvidia.com/cupti/r_main.html#r_dynamic_detach
Is it possible to profile only a shared library, without profiling the main program?
For example, I developed a plugin and I would like to profile it, without needing to profile the whole application. I just want to see the bottlenecks of my plugin. (Of course, I would like to profile it while the main application is running and has loaded my plugin...)
I'm working on Linux and I'm used to callgrind, but out of curiosity I'm also interested in the possibilities on all systems, so I'll keep the question general.
I'm interested in this because the main program is quite slow, and I don't want to add profiling overhead on top, since I'm not interested in the main program's performance here...
On Linux, the perf statistical profiling tool has very low overhead (1-2%), so you can profile the entire application with perf record ./your_application and then analyze the generated perf.data profile with the perf report command. You can filter the perf report output to specific shared libraries, or search for the function names of your plugin. Read more at http://www.brendangregg.com/perf.html
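For instance, the filtering described above looks like this (the plugin's library name is illustrative):

```shell
# Record the whole application; -g captures call graphs
perf record -g ./your_application

# Restrict the report to samples that fell inside the plugin's shared object
perf report --dsos=libmyplugin.so
```

The whole process is sampled, but only the plugin's share of the samples is shown, so you see its hot spots without wading through the main program's.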
Callgrind is not just a profiler: it is a binary recompiler used to implement an exact, instrumentation-based profiler, and it imposes a 10-20x overhead on any code, even when the profiling tool is not enabled.
Your plugin only runs during certain times, right - such as when the user requests certain activities? People use this method with any IDE: manually pause during those times. The pauses will land in the plugin in proportion to how much time it uses. Before you pause there is no performance impact, because the app runs at full speed; during the pause it is stopped, which you don't care about, because you're diagnosing your plugin.
I am trying to analyze the performance of a web service project, by running it through WcfTestClient.exe. I have both that exe and the web service project set as targets in a performance analysis session. The exe is set as launch, but not to collect samples. The project is set to collect samples, but not as launch.
Unfortunately, when I try to start the session, I get an error that reads "Profiling 64-bit processes is not supported by this version of the profiling tools. Please use the profiling tools from the x64 directory." What I assume is happening is this: the session starts by launching WcfTestClient.exe, which is a 32-bit application, and therefore starts the 32-bit version of the profiling tools. But my web service project is 64-bit, so when profiling reaches that point it throws the error, since the 32-bit tools can't profile it.
Is there some way to force the session to use the 64 bit tools? Or perhaps there's a 64 bit version of the wcf test client?
Also before anybody calls me on this: I posted about this on the MSDN forums, and got a fairly useless non-answer here: http://social.msdn.microsoft.com/Forums/en-US/vstsprofiler/thread/ff54e1fc-b9fb-47d4-9e9f-a3c552a6f242
Visual Studio 2012 RC has the ability to use externally collected trace files of IIS app pool data, collected by the IntelliTrace Standalone Collector. I know that in my production app there is some kind of memory leak that becomes apparent after a few hours of monitoring.
I now have my large iTrace file ready to plug into VS2012, but would like to know how to find the questionable object.
I am also in the process of using the debugger tools and following these instructions. However, I run into an error indicating that the appropriate CLR files (or something like that) are not loaded when trying to run .load SOS or any other command.
I was hoping to see a similar address list and consumed memory in the IntelliTrace analyzer - is this possible?
Some assistance would be appreciated.
IntelliTrace only profiles events and method calls. You won't get information on individual objects or memory leaks, because it isn't tracking memory. There is also no event provided for object creation/destruction, so you can't infer that either.
To track memory you will have to use the profiling tools on your app - though don't attach them to your production server! Use a test environment and see if you can replicate the problem there.