I am profiling memory usage in my application using ANTS profiler, and it shows that a large amount of memory is held in generation 2 by Excel cell objects. The application uses the EPPlus library to generate Excel files.
I am attaching some of the graphs generated by the tool. I am not sure how to drill further to figure out which object is holding the references and eating up memory. Please provide your suggestions on how to drill further into the graph.
The problem is: this graph again points to the same object. I think I am not using this tool properly.
Thank you
Firstly - only you, as the developer, know whether those Excel objects should be in memory or not (how would an outside observer know whether these were cached objects, for example?). Perhaps you could check whether Dispose has been called on those objects (see the sketch below).
Secondly - do you see where it has given you a warning about Large Object Heap fragmentation? I'd investigate that first.
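For the EPPlus side specifically, wrapping the package in a using block is the standard way to guarantee Dispose runs once the file is written. A minimal sketch, assuming a simple export routine (the path and worksheet contents here are hypothetical):

    using System.IO;
    using OfficeOpenXml; // EPPlus

    static void WriteReport(string path)
    {
        // The using block guarantees Dispose, so the package's cell
        // objects become eligible for collection after the method returns.
        using (var package = new ExcelPackage())
        {
            ExcelWorksheet sheet = package.Workbook.Worksheets.Add("Report");
            sheet.Cells[1, 1].Value = "Hello";
            package.SaveAs(new FileInfo(path));
        }
    }

If cells still show up rooted in generation 2 after this, something else (a cache, a static field, an event handler) is keeping the package alive.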
Looking at the two snapshots you have taken (from the summary screen), I lack context on the workflow for each snapshot, so I am going to make some assumptions:
Snapshot 1: taken before you create the Excel file.
Snapshot 2: taken after you have created the Excel file and deem the action completed.
Firstly, I would recommend you slightly adjust your workflow and take the following snapshots:
Snapshot 1: taken prior to creating an Excel file.
Snapshot 2/3: This depends on the application's workflow; if you create data (which you can view, etc.) and then create a file from that data, take one snapshot after data generation and another after file creation.
Final snapshot: Take one final snapshot - this is good practice to get a better feel for what is in memory after a completed workflow, as it allows the finalizer queue to clear.
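One common trick before taking that final snapshot (an assumption on my part that your test harness lets you inject code) is to force a full collection and drain the finalizer queue, so the snapshot shows only genuinely rooted objects:

    // Collect, let finalizers run, then collect what they released.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();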
There is no definitive answer here, as it is hard to diagnose this from a few screenshots - just advice. I have no knowledge of your application, and I am making assumptions about your desired memory usage (caching of data, etc.).
(a) Using the Instance Categorizer view with Categorized References selected, you need to start at the top reference chain and work from right to left. Again making (simplistic) assumptions: look beyond the nodes that are part of the Excel library and see what class is referencing them.
At this point this will either give you enough to look for that reference in code (b) or a starting point to explore more deeply (c). If you think this reference chain (right-to-left path) is not worth pursuing, then move to the next one down. With the Instance Categorizer view you work right to left, top to bottom.
(b) If you have the source code, you can right-click on a node and browse to the class in Visual Studio. Or just go there yourself :>
(c) By exploring a reference chain from (a) more deeply, I mean using the link "Show the instances on this path", then, judging from the displayed metrics (size, distance from GC root), choosing an instance of the class to explore in more detail. This will take you to the Instance Retention Graph, which shows the reference chain for that instance in more detail. Pay attention to tool tips here; coloured regions and the type of node all mean various things. See my links below.
I think what is clear from this answer is that you would likely benefit from looking at the AMP documentation, since there is a lot to learn; I have only taken an extremely high-level walk through the application with you, and I have made many (far too many) assumptions.
See links for some help:
Class view filters
Instance Categorizer
Instance retention graph
Large object heap fragmentation tips <- as pointed out by the other poster.
Red Gate's learning portal for the Memory profiler <- Perhaps look at Videos and the Technical Papers sections
I hope that gets you started.
Related
I have a bunch of profiles of my application (200+ profiles per week), and I want to analyze them to extract some performance information.
I don't know what information can be extracted from these files - maybe
the slowest functions, or the hot-spot code across different release versions?
I have no relevant experience in this field; does anyone?
Well, it depends on the kind of profiles you have; you can capture a number of different kinds.
In general, most people are interested in the CPU profile and the heap profile, since they allow you to see which functions/lines use the most CPU and allocate the most memory.
There are a number of ways you can visualize the data and/or drill into it. You should take a look at the help output of go tool pprof and the pprof blog article to get you going.
As for the number of profiles: in general, people analyze them one by one. If you want an average over multiple samples, you can merge them with pprof-merge.
You can subtract one profile from another to see relative changes, using the -diff_base and -base flags.
Yet another approach is to extract the percentage per function for each profile and see if it changes over time. You can do this by parsing the output of go tool pprof -top {profile}. You can correlate this data with known events, like software updates or high demand, and see if one function starts taking up more resources than the others (it might not scale very well).
If you ever get to the point where you want to develop custom tools for analysis, you can use the pprof library to parse the profiles for you.
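As a rough illustration of that route, here is a minimal Go sketch using the github.com/google/pprof/profile package to print a flat value per function for one profile (the file name and the choice of the first sample value are assumptions):

    package main

    import (
        "fmt"
        "os"

        "github.com/google/pprof/profile"
    )

    func main() {
        f, err := os.Open("cpu.pprof") // hypothetical profile file
        if err != nil {
            panic(err)
        }
        defer f.Close()

        p, err := profile.Parse(f)
        if err != nil {
            panic(err)
        }

        // Attribute the first sample value (e.g. CPU samples) to the
        // leaf function of each stack and sum per function name.
        flat := map[string]int64{}
        for _, s := range p.Sample {
            if len(s.Location) == 0 {
                continue
            }
            for _, line := range s.Location[0].Line {
                if line.Function != nil {
                    flat[line.Function.Name] += s.Value[0]
                }
            }
        }
        for name, v := range flat {
            fmt.Printf("%12d %s\n", v, name)
        }
    }

Run this over each weekly batch and store the per-function numbers, and you have the time series described above.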
I am trying to find landing spaces as well as intermediate landings for ramps using the Revit API, for a few of my models. So far I have tried to find the start and end points of the ramp using location parameters, as mentioned in the following discussion (How to find start/end of ramp in revit, perhaps with sketches?), but it seems that this parameter is unavailable in the models I am trying to use.
I have tried to explore other ways using Revit API functionality, but with no visible success so far. I am also new to the Revit API and as such have a limited understanding of what features are available in it.
Can someone help me identify ramp landings or ramp endpoints?
(Screenshots attached: Ramp Location; Ramp Property Palette)
I see three possible approaches:
Location
Sketch
Geometry
Using the Location property, casting it to LocationCurve and grabbing the endpoints from that, as suggested in the other discussion you link to, would be the optimal solution.
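In code, that first approach might look like the following sketch; it assumes the ramp's Location really is a LocationCurve, which, as you note, is not the case in your models, so the false branch is the situation you are hitting:

    using Autodesk.Revit.DB;

    // Sketch: try to read the ramp's start/end points from its Location.
    static bool TryGetRampEndPoints(Element ramp, out XYZ start, out XYZ end)
    {
        start = end = null;
        LocationCurve lc = ramp.Location as LocationCurve;
        if (lc == null) return false; // no location curve available
        start = lc.Curve.GetEndPoint(0);
        end = lc.Curve.GetEndPoint(1);
        return true;
    }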
Another approach mentioned in that thread is to use the sketch defining the ramp to determine its start and end points. Getting hold of the sketch elements is not completely trivial, but absolutely doable, as explained in The Building Coder discussions on:
Stable Reference Relationships
Retrieving and Snooping Dependent Elements
If all else fails, the final alternative would be to grab and analyse the ramp geometry to deduce the required information from that.
How can I get a list of linked work item IDs for a set of work items?
Excel-hosted queries preferred. API Sample is acceptable.
Direct DB table query is acceptable (read-only and unsupported of course!)
Many thanks in advance! -Zephan
MORE INFORMATION
UPDATE: No answers for my original question, so I am broadening the scope of acceptable answers as follows:
Answer for TFS2015 (migrating very shortly) or TFS2013 (potentially useful for TFS2015) is preferred over TFS2010
Coding is acceptable if there are any APIs or PowerShell cmdlets (MS or community).
Connecting directly (read-only!) to the TFS DB tables is acceptable (source tables and related relationship link table names). Yes, directly referencing TFS DB tables is VERY unsupported, read-only, and "AT YOUR OWN RISK." It still beats having to manually copy/paste data or reconstruct the list of links in Excel.
ORIGINAL QUESTION & DETAILS
My team uses TFS2010 (soon 2013, or hopefully 2015) and VS2010-2015. I need to support traceability reports and analyze/quantify our coverage of ~300 Test Case work items linked to ~400 Requirement work items. Direct Link and Tree queries are close, but don't give me the related links on the same row as the parent work item. Many thanks in advance for your suggestions and any related code fragments.
Example:
3 test cases (Test1, Test2, Test3)
4 Requirements (Req1, Req2, Req3, Req4)
For simplicity, let's just use TFS work item IDs to represent each TestN and ReqN. In actuality, I have a keyword to identify my validation requirements (separate from the 1,000s of other requirements in this Team Project). The only Test Case work items I care about for this problem are those linked to one or more Validation Requirements for traceability.
Scenarios:
1:1 (simple) Test1 is linked to Req1
1:2 (1:n) Test2 is linked to Req2 and Req3
2:1 (n:1) Test3 (and Test2) are both linked to Req3
0:1 (Requirement missing Test coverage) Req4 has no test case links
I have a good coverage-gap query, created as a Direct Link query for all Requirements with the "linking filters" set to "Only return items that do not have the specified links".
Desired output (all tests with list of related work items):
|Test1 | Req1 |
|Test2 | Req2, Req3 |
|Test3 | Req3 |
For row #2 I am OK with other separators, or even the entire list using the same separator (CSV or TAB delimited).
Skip right to the answer now if you have a tidy one. If not, I have added considerable RELATED RESEARCH info below to help kick-start an idea that fits the need - especially since this hasn't been discoverably solved in the last 5 years :-).
RELATED RESEARCH (loooong but may be useful)
1. Visual Studio Queries
Flat Queries should support a list of linked items out-of-the-box... but they do not. The RelatedLinkCount field is handy for knowing whether there are any links to chase, but that's it for flat queries.
Direct Link queries give a list of all direct links, but the related IDs are on rows below the parent work item. I am seriously considering creating a formula to look at the next X rows to build a list of IDs, but this would be fragile, especially when more than 3 requirements are linked to the same test. It still might solve 80% of my tracing needs.
Tree Queries also show links, but on different rows. Additionally, they tend to follow just one link type. Ideally I need a list of User Requirements linked to Functional Requirements linked to Test Case(s).
2. Tools / Plug-ins
SmartExcel4TFS (eDEVTech, http://www.modernrequirements.com/smartexcel4tfs/) supports 3 reports, but none of them gets me the core data I need in an easily used format. At least it is FREE if you have an MSDN Premium subscription.
The Requirements to Tests Trace Matrix is super-interesting. Alas, right now I need to go the other way (Requirements linked to a given Test Case). It also merges cells and has sub-sections that I think are hard to manipulate. (I may revisit this option, though.)
Intersection Traceability Matrix report is WAY too wide for a full 300 x 400 grid :-O.
Work Item Decomposition Matrix also didn't give me the desired contents (though frankly I've forgotten this report's layout from when I checked ~1 month ago).
3. TFS API calls
I have actually avoided this route in favor of a native Excel solution... but if I can get an example of Excel VBA code (or other code with a link showing how to call it from Excel) I may go this route. At this point I don't have time to dig into rolling my own... but this would be cool, assuming performance is acceptable.
Relevant API/code fragments:
Retrieving TFS Results from a Tree Query (Blogs.msdn.com 2012.02.22) - Looks like this would get me the data I need, but it is not in Excel so I'd need a bridge example of some sort calling this within Excel.
Retrieving work items and their linked work items in a single query using the TFS APIs (stackoverflow.com 2012.01.12) - Also looks very promising, but not connected to Excel. Gives hints for 2 level and 3 level nested links and performance consideration (don't make second call for each item returned!)
Retrieving work items using the Team Foundation Server API (pwee167.github.io 2012.09.18) - Excellently written introductory walkthrough blog posting to learn how to build an (ASP.Net MVC3) app that calls TFS APIs to run Flat or Tree queries. Start here if writing C# (which I could do but don't have time/justification unless easy example to integrate with Excel).
How can I query work items and their linked changesets in TFS? (stackoverflow.com 2011.05.10) - I don't need changesets, but this has VB code to instantiate a new TfsTeamProjectCollection, which might work directly in Excel VBA (assuming the proper reference is found and added):
var projectCollection = new TfsTeamProjectCollection(
    new Uri("http://localhost:8080/tfs"),
    new UICredentialsProvider());
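If you do go the API route, a rough C# sketch of the n:1 aggregation you describe might look like this (the WIQL filter and the use of RelatedLink are assumptions for illustration; as noted above, avoid a second server round trip per item when you have 300+ work items):

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    var projectCollection = new TfsTeamProjectCollection(
        new Uri("http://localhost:8080/tfs"),
        new UICredentialsProvider());
    var store = projectCollection.GetService<WorkItemStore>();

    // Hypothetical WIQL: all Test Case work items.
    WorkItemCollection tests = store.Query(
        "SELECT [System.Id] FROM WorkItems " +
        "WHERE [System.WorkItemType] = 'Test Case'");

    foreach (WorkItem test in tests)
    {
        // One row per test: its ID plus a comma-separated list of
        // related work item IDs - the desired |Test|Req, Req| shape.
        var linkedIds = test.Links.OfType<RelatedLink>()
                            .Select(l => l.RelatedWorkItemId);
        Console.WriteLine("|{0}|{1}|", test.Id, string.Join(", ", linkedIds));
    }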
OK, that's everything I have gathered on this problem. Please help contribute the missing magic tool/snippet, or follow the info above to build that last bit I have not had time to prototype and debug. Many thanks in advance!! -Zephan
Updating a performance test script, e.g. with LoadRunner, can take a lot of time and be quite frustrating. If there have been updates to the application, you usually have to run the script, find out what has to be changed, update it, run it again, and so on. Does anyone have some concrete best practices for easing this updating inferno? One obvious thing is good communication with developers.
It depends on the kind of updates. If the update is dramatic, like adding new fields for the user to fill in, then someone has to manually touch up the test scripts.
If, however, the update is minor, for example, some changes to the hidden fields or changes to the internal names of user-facing fields, then it's possible to write a script that checks the change and automatically updates the test script.
One of the performance test platforms, NetGend, automatically takes care of the hidden fields and the internal names of user-facing fields, so it's very easy to create a script to performance-test an HTML form. The tester only needs to fill in the values that he/she would enter in a browser, so no correlation is necessary there. Please send me a message if you need to know more about it.
There are many things you can do to insulate your scripts from build-to-build variability. The higher up the OSI stack you go, the lower the maintenance charge, but the higher the resource cost for the virtual user type. Assuming changes are limited to page-level resources and a few hidden fields here and there for web sites or applications, you can record in HTML mode. You can blast away the EXTRARES sections, as the page parser in HTML mode will automatically parse the page and load the page resources even without an explicit reference - it can be a real pain to keep these sections in sync if you have developers who are experimenting quite a bit.
Next up, for forms which have a very high velocity in terms of change, consider the use of a web_custom_request() for that one form. You can use correlation statements to pick up all of the name|value pairs as needed and build the form submit dynamically. There will be a little more up-front work for this, but you should see the payoff at around the fourth changed build, where you would normally have been rebuilding some scripts.
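A hedged sketch of that idea in LoadRunner's C syntax (the URLs, boundaries, and field names are placeholders to adapt to your application):

    // Capture the form's dynamic token from the page that serves the form.
    web_reg_save_param("FormToken",
        "LB=name=\"__token\" value=\"",
        "RB=\"",
        LAST);

    web_url("form_page",
        "URL=http://myserver/app/form",
        "Mode=HTML",
        LAST);

    // Build the submit dynamically with the correlated value, so field
    // churn only requires touching the correlation statements above.
    web_custom_request("submit_form",
        "URL=http://myserver/app/submit",
        "Method=POST",
        "Mode=HTML",
        "Body=__token={FormToken}&user=jdoe&action=save",
        LAST);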
Take a look at all of the hosts referenced in your code and parameterize all of these items. I have a template that I use for web virtual users which pairs a default value with the ability to change any of the host names via the control panel extra attributes section. Take a look at the example for lr_get_attrib_string() for how you might implement the pickup, and pair that with a check for NULL and a population with a default value in your code.
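Something along these lines, where the attribute name and fallback host are hypothetical:

    // Pick up the host from the controller's extra attributes;
    // fall back to a default when the attribute is absent.
    char *host = lr_get_attrib_string("app_host");
    if (host == NULL)
        host = "app.mycompany.local";  // hypothetical default
    lr_save_string(host, "HostParam"); // use as {HostParam} in requests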
This is going to seem counterintuitive, but comment your script heavily around the sections that change often, so you know where to spend the extra up-front labor to handle a more dynamic data set.
Almost nothing you do with any tool can save you from structural changes in the design and flow of the app, such as the insertion of a new page in the workflow, but paying attention to the design of the high-change pages, of which there are typically a small number, can result in test code with a very long life.
Of course, if your application is web-services based, then there is a natural long life to the use of exposed public services. Code may change on the back end of the service, but typically the exposed public interface is very stable.
I'm dealing with an old Motif application that needs to load and display a long list of entries (around 1500). It creates and manages an instance of xmFormWidgetClass via XtVaCreateManagedWidget() and then stuffs it with a bunch of linear hierarchies: xmFrameWidgetClass->xmFormWidgetClass->xmFormWidgetClass->xmPushButtonWidgetClass. Each PushButton contains a multi-line label. When this thing is being populated, it takes a lot of CPU, which it spends doing geometry calculations inside the X/Motif libraries. The pace at which new buttons are added degrades very quickly. It looks like there is an O(N) algorithm being used inside XtVaCreateManagedWidget().
Things get much, much better if I call XtUnrealizeWidget() on the original instance of the xmFormWidgetClass. Entries are then added at an almost constant speed, but afterwards I cannot find a way to display the whole thing that I built. XtRealizeWidget() on the original instance of the xmFormWidgetClass does not render it in the window.
What am I doing wrong? Is there a way to populate the hierarchy and then calculate the geometry and render it to the screen at once?
Redesigning the application is an option, but it is a last-resort type of option.
Any advice that keeps me within Motif libraries will be highly appreciated!
Regards,
/Sergey
Try calling XtManageChild after XtRealizeWidget.
Try creating all the widgets unmanaged and placing them on a WidgetList, then call XtManageChildren(). Please see the following reference:
http://www.s-and-b.su/syshlp/motif_guide/MotifProgGuide/Making_Widgets_Visible.html
Every time an individual widget is managed, the parent's changed_managed procedure is called.
XtManageChildren calls the changed_managed procedure only once; this may help.
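A minimal sketch of that pattern (the parent form, widget name, and entry count are illustrative):

    #include <Xm/Xm.h>
    #include <Xm/PushB.h>

    #define N_ENTRIES 1500

    /* Create every button unmanaged, then manage the whole batch in one
       call so the parent's changed_managed procedure runs only once. */
    static void populate(Widget parent_form)
    {
        Widget buttons[N_ENTRIES];
        int i;
        for (i = 0; i < N_ENTRIES; i++) {
            buttons[i] = XmCreatePushButton(parent_form, "entry", NULL, 0);
            /* ... set label and layout resources here ... */
        }
        XtManageChildren(buttons, N_ENTRIES);
    }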