TeamCity code coverage includes excluded code in total statistics

We have an issue with the code coverage in TeamCity (using dotCover).
In the statement coverage, the percentage is calculated as covered/total.
Which is good.
When we exclude namespaces (e.g. the test code and 3rd-party products), the excluded namespaces are still listed, but their covered metric is 0 (zero), which is what we expect.
However, the Total metric is not zero for the excluded namespaces and is included in the overall Total metric. This skews the coverage metrics!
Screenshot of the problem:
How do I get around this?
(This is TC 7.1.2)

Move the 3rd-party code into a separate assembly.
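If moving the code around is not feasible, excluding the assemblies from coverage via dotCover filters in the TeamCity build step is another option. The assembly masks below are hypothetical, and the exact filter syntax should be checked against your dotCover/TeamCity version:

```
+:MyProduct.*
-:*.Tests
-:SomeThirdParty.*
```

Assemblies excluded this way should not contribute to the Total statement count at all, rather than being listed with a covered value of 0.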

Acumatica and Unit Cost

As background, we rely heavily on serial items, and every single serial item has its own unit cost, price, etc. We are on 2017 R2 for now.
We have had to override the standard Acumatica functionality that tries to pick up the stock item's last cost and use it as the unit cost for that form/document. We have seen this on most documents in the Inventory and Distribution modules.
Anyway, we have realized that we missed a few spots and now we have a massive project to try and identify all of the items that were out of sync and correct them.
My questions/request for help is the following:
1) Is there any way to know all of the places that could effectively change what Acumatica has as the unit cost for a serial item? I know this happens on inventory receipts, purchase receipts, adjustments, items that come out of the Manufacturing module, etc. Pretty much anywhere that ends with a receipt of some sort, I guess. I see this is also the case for inventory receipts on a two-step transfer (this is the one that got us; we didn't realize that would happen).
2) Is there any way, on a more global level within Acumatica, to have it automatically pick up the serial-level unit cost instead of ever using the stock item cost statistics?
3) Does anyone have any ideas on how to identify all of the items that may be affected, and how to resolve them once we identify them? Thankfully, we do have some custom fields that will hopefully hold the correct unit cost, but we will still need to adjust these items back to their correct unit cost. Any ideas would be greatly appreciated.
I hope I explained the above clearly, and thank you in advance to anyone kind enough to help us out.
Please find a detailed explanation of Item Costing in Acumatica at this link.
I think you can consider generating an Inventory Adjustment for all the items with the corresponding serial numbers, zero quantity, and a fixed amount, to correct all the costs without checking whether each cost is correct or not. Please create backup snapshots before trying to generate such adjustments, and check it on a test instance first.

Why is it not a good idea to use different population sizes for test and holdout?

So we are trying to run an A/B test with changes applied to 10% of the population. My PM told me we should set aside a separate control group of 10% too, so we can compare the 10% test and 10% control later. My question is: why is it better to compare the 10% test vs the 10% control, instead of the 10% test vs the remaining 90% of the population, which is essentially everyone who doesn't see the changes?
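For intuition on the pure statistical-power side of this question, the standard error of the difference in means can be computed for both splits. The numbers below are hypothetical, and equal per-user variance across groups is assumed:

```python
import math

# Hypothetical population and per-user standard deviation.
N = 100_000
sigma = 1.0

n_test = int(0.10 * N)       # 10% treatment group
n_ctrl_small = int(0.10 * N) # matched 10% control
n_ctrl_large = int(0.90 * N) # the remaining 90% as control

# Standard error of the difference in means for each comparison.
se_10_vs_10 = sigma * math.sqrt(1 / n_test + 1 / n_ctrl_small)
se_10_vs_90 = sigma * math.sqrt(1 / n_test + 1 / n_ctrl_large)

print(se_10_vs_10, se_10_vs_90)
```

Note that variance alone actually favors the larger comparison group (the 10%-vs-90% standard error is smaller), so a matched-size holdout would have to be justified on other grounds than standard error.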

Obtaining a Referenced Assembly's Call Count

I am in the process of writing a document analyzing a large codebase for quality and maintainability. As part of this report I wish to include a count of the number of references an assembly makes to another assembly within the solution. This will give an idea of how tightly coupled each assembly is to another.
Is there a tool in Visual Studio 2015 Enterprise (or 3rd Party Plug-In) that can give me this number?
So far I have tried Visual Studio's Code Map tool, but this appears to just generate a visualization with arrows, which I would then have to count manually; furthermore, it only appears to go down to class/struct level, not to the number of individual references within each class/struct.
NDepend (http://www.ndepend.com/) offers this functionality. It can also be quite helpful in more general terms for the type of exploratory quality analysis that you describe.
You can use FxCop / Code Analysis to do this; it has a number of maintainability rules. Of most interest to you would probably be:
CA1506: Avoid excessive class coupling
This rule measures class coupling by counting the number of unique type references that a type or method contains.
I believe the thresholds are 80 for a class and 30 for a method.
It's relatively easy to set up; basically, you just need to configure it on a project:
Opening the ruleset lets you choose which rules to run (and whether they are warnings or errors); there are many, many rules.
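For reference, enabling just this rule in a Visual Studio ruleset file looks roughly like the sketch below (the ruleset name and ToolsVersion are arbitrary placeholders):

```
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Coupling rules" ToolsVersion="14.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
         RuleNamespace="Microsoft.Rules.Managed">
    <!-- CA1506: Avoid excessive class coupling -->
    <Rule Id="CA1506" Action="Warning" />
  </Rules>
</RuleSet>
```

Pointing the project's Code Analysis settings at this file runs only the selected rule during analysis.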
To expand upon Nicole's answer, I have tested the trial of NDepend and I believe I have found the figures I was looking for in something they call the "Dependency Matrix". My understanding of it is as follows.
The numbers in green are a count of how many times the assembly in the current row references the assembly relating to the number in the current column. The numbers in blue are a count of how many times the assembly in the current row is referenced by the assembly relating to the number in the current column. Since an assembly cannot make an external reference to itself, no numbers can appear on the diagonal line.
What I do not understand, however, is why, for example, the number in cell (0, 4) is 93 but the number in cell (4, 0) is 52; shouldn't these numbers be equal? Assembly 0 is used by assembly 4 the same number of times as assembly 4 uses assembly 0 - how can these numbers be different?
UPDATE: I have watched a PluralSight video on this tool and found out that the number in the green box represents how many methods in the referencing assembly make reference to the referenced assembly. The number in the corresponding blue box represents how many methods in the referenced assembly are being used by the referencing assembly. Neither of these numbers precisely represents the number of calls one assembly makes to another (since a method could contain multiple references), but I believe it does provide a sufficient level of granularity anyway, since methods should conform to the SRP and thus all references within a method should relate to a single behaviour.
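The asymmetry can be illustrated with a toy sketch (invented call edges; this is not NDepend's implementation): over the same set of method-level call edges, the "green" number counts distinct caller methods, while the "blue" number counts distinct callee methods, and these need not match:

```python
# Each edge: (caller_assembly, caller_method, callee_assembly, callee_method).
# Invented data: three methods in assembly A4 call two methods in assembly A0.
edges = [
    ("A4", "m1", "A0", "f"),
    ("A4", "m1", "A0", "g"),  # m1 alone touches two methods in A0
    ("A4", "m2", "A0", "f"),
    ("A4", "m3", "A0", "f"),
]

# "Green": how many methods in A4 reference A0.
green = len({cm for ca, cm, xa, xm in edges if ca == "A4" and xa == "A0"})

# "Blue": how many methods in A0 are used by A4.
blue = len({xm for ca, cm, xa, xm in edges if ca == "A4" and xa == "A0"})

print(green, blue)  # 3 2
```

Here three caller methods fan in on only two callee methods, so the two cells of the matrix report different numbers even though they describe the same dependency.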

Comparing two different sentences having same meaning in Excel

I have two cells.
Cell 1 contains this value --> Portfolio Rule Failure (Justification Required): Style Sector Structure: 0.93% for MUNI - SENIOR LIVING breaks the 0.00% maximum failure limit. Style Min Security Rating: NR breaks the BBB- minimum failure limit.
Cell 2 contains this value --> Hard Rule Failure (Requires Portfolio Rule Justification to override): Sector Max Weight % - Style failed: MUNI - SENIOR LIVING: 0.93% Min None Max 0% Min Security Rating - Style failed: Worse Than BBB-: 0.93% Min None Max 0%
If you read them, both convey the same meaning. If I try to compare these two in Excel, it will say that they are different. But they actually have the same meaning, even though the words used are different. Is there a way in Excel, or in some data analysis tool, to say that both are the same?
One way is to replace the similar pattern words in one of the columns with the other, but I have thousands of records like this, so it would be nearly impossible to update these manually.
Please advise.
Here's an approach you might try: if you can get the complete inventory of all possible messages or message patterns into a dedicated worksheet, identify the duplicates there, and provide a standard definition, you can then use VLOOKUP to grab that standard.
Essentially, you build a dictionary that serves to interpret the messages once, and then refer to it as needed.
You may need to parse the original message into logical pieces such as:
Message type, e.g. hard failure, warning, etc.
Attribute that triggered the message, e.g. MUNI - SENIOR LIVING
Reason, e.g. failure limit exceeded
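As a sketch of that parsing step, a regular expression can pull the attribute and percentage out of each message style, so rows can be matched on those fields rather than on the raw text. The patterns below are assumptions derived only from the two sample messages (the second is abridged):

```python
import re

msg1 = ("Portfolio Rule Failure (Justification Required): Style Sector Structure: "
        "0.93% for MUNI - SENIOR LIVING breaks the 0.00% maximum failure limit.")
msg2 = ("Hard Rule Failure (Requires Portfolio Rule Justification to override): "
        "Sector Max Weight % - Style failed: MUNI - SENIOR LIVING: 0.93% "
        "Min None Max 0%")

def parse_style1(msg):
    # Matches the "<pct>% for <attribute> breaks the ..." wording.
    m = re.search(r"([\d.]+)% for (.+?) breaks the", msg)
    return (m.group(2), m.group(1)) if m else None

def parse_style2(msg):
    # Matches the "... failed: <attribute>: <pct>% ..." wording.
    m = re.search(r"failed: (.+?): ([\d.]+)%", msg)
    return (m.group(1), m.group(2)) if m else None

print(parse_style1(msg1))
print(parse_style2(msg2))
```

Both calls return the pair ("MUNI - SENIOR LIVING", "0.93"), so once each message style has its own parser, two rows can be compared on the extracted (attribute, percentage) key instead of the full string.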

Profit maximization given a list of projects and a list of resources needed to complete each project

We are given a list of projects and the cash reward for completing each project. Each project has a list of resources needed to complete it. If we decide to buy a resource, we pay the listed price once and can then use it for all projects. Each resource can be required by more than one project. If we buy all the resources a project requires, the project is complete and we receive the stated cash reward. We need to find an algorithm (not code) to maximize the profit.
I can think of only one heuristic.
First of all, I go through all the projects and check whether there are any for which buying all the required resources loses no money (I either earn some money or break even). If I find such a project, I buy all its required resources.
But such a project may or may not exist, and otherwise I end up trying all possible subsets. I still think it could be solved in polynomial time, but I can't think of any way to apply a network flow algorithm. Any ideas?
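For small inputs, the "try all possible subsets" baseline mentioned above can be sketched as follows (the rewards and resource prices are made-up data):

```python
from itertools import combinations

# Hypothetical instance: reward per project, resources each project
# needs, and the one-time price of each resource.
rewards = {"p1": 100, "p2": 200, "p3": 150}
needs = {"p1": {"r1", "r2"}, "p2": {"r2", "r3"}, "p3": {"r1", "r3"}}
price = {"r1": 120, "r2": 80, "r3": 110}

def best_profit():
    best = 0  # buying nothing is always an option
    projects = list(rewards)
    for k in range(1, len(projects) + 1):
        for subset in combinations(projects, k):
            # Union of resources needed by the chosen projects,
            # each paid for only once.
            bought = set().union(*(needs[p] for p in subset))
            profit = sum(rewards[p] for p in subset) - sum(price[r] for r in bought)
            best = max(best, profit)
    return best

print(best_profit())
```

On this instance, each project is unprofitable alone, yet completing all three yields a profit of 140 because the resource costs are shared. For what it's worth, this structure matches the classic project selection (maximum-weight closure) problem, which is known to be solvable in polynomial time via a min-cut computation, so the exponential enumeration above is only a baseline.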
