How can I get NDepend to fail analysis if new calls are made to a deprecated type? - cql

We have a type named OldThing which we want to deprecate over time.
We need an NDepend query/rule that says from this point on, don't add any more calls to 'OldThing'.
We currently use NDepend and have a baseline build for checking rules like "don't make large methods even larger".
So, we'd like to use NDepend to track any additional calls made to OldThing. I have the following CQL query:
// <Name>Don't use OldThing going forwards</Name>
warnif count > 0
let containsMethods = Methods.WithFullNameIn(
"MyNamespace.OldType.get_Foo()",
"MyNamespace.OldType.get_Bar()")
from m in Application.Methods.UsingAny(containsMethods)
where m.IsUsedRecently()
select m
... the trouble is, it doesn't seem to work; it doesn't find any new calls.
Is there a better way of doing this in NDepend (perhaps by utilising trend metrics)?

You don't need where m.IsUsedRecently(); that clause only applies to third-party method calls.
Then double-check that the let expression actually matches the deprecated methods (you could also match them all at once by using the ObsoleteAttribute).
Finally, mark this rule as critical and it should work :)
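For illustration, here is a sketch of the rule with the where clause removed and the deprecated members matched via the ObsoleteAttribute instead of a hard-coded name list; the HasAttribute/AllowNoMatch helpers mirror NDepend's stock CQLinq rules, so double-check them against your NDepend version:
// <Name>Don't use OldThing going forwards</Name>
warnif count > 0
// Assumes the OldThing members have been annotated with [Obsolete] in source.
let obsoleteMethods = Methods.Where(m => m.HasAttribute("System.ObsoleteAttribute".AllowNoMatch()))
from m in Application.Methods.UsingAny(obsoleteMethods)
select m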

Related

NUnit Attribute to simulate condition-based Assert.Inconclusive with custom message text

I have some tests that depend on a certain thing being true (access to the internet, as it happens, but that isn't important and I don't want to discuss the details of the condition).
I can very easily write a static helper method which will test the (parameterless) condition and call Assert.Inconclusive("Explanatory Message") if it's true/false. And then call that at the start of each Test which has this requirement.
But I'd like to do this as an Attribute, if possible.
How, in detail, do I achieve that, in NUnit?
What I've tried so far:
There's an IApplyToTest interface, exposed by NUnit, which I can make my Attribute implement and which will allow me to hook into the TestRunner, but I can't get it to do what I want :(
That interface gives me access to an NUnit.Framework.Internal.Test object.
If I call:
test.RunState = RunState.NotRunnable;
then I get something equivalent to Assert.Fail("").
Similarly RunState.Skipped or RunState.Ignored give me the equivalent of Assert.Ignore("").
But none of these are setting a message on the Test, and there's no test.Message = "foo"; or equivalent (that I can see).
There's a test.MakeInvalid("Foo") which does set a message, but that's equivalent to Assert.Fail("Foo").
I found something that looked promising:
var result = test.MakeTestResult();
result.SetResult(ResultState.Inconclusive, "Custom Message text");
But that doesn't seem to do anything; the Test just Passes :( I looked for a test.SetAsCurrentResult(result) method in case I need to "attach" that result object back to the test? But nothing doing.
It feels like this is supposed to be possible, but I can't quite figure out how to make it all play together.
If anyone can even show me how to get to Skipped + Custom Message displayed, then I'd probably take that!
If you really want your test to be Inconclusive, then that's what Assume.That is there for. Use it just as you would use Assert.That; if the specified constraint fails, your test result will be Inconclusive.
That would be the simplest answer to your question.
However, reading through the things you have tried, I don't think you actually want Inconclusive, at least not as it is defined by NUnit.
In NUnit, Inconclusive means that the test doesn't count because it couldn't be run. The result basically disappears and the test run is successful.
You seem to be saying that you want to receive some notice that the condition failed. That makes sense in the situation where (for example) the internet was not available so your test run isn't definitive.
NUnit provides Assert.Ignore and Warn.If (also Warn.Unless) for those situations. Or you can set the corresponding result states in your custom attribute.
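For example, a minimal sketch of the Assert.Ignore route in a per-fixture setup method; InternetIsAvailable() is a hypothetical stand-in for your own condition check:
[SetUp]
public void CheckPreconditions()
{
    // Skip the test, with a visible message in the results, when the
    // precondition is not met.
    if (!InternetIsAvailable())
        Assert.Ignore("Skipped: no internet connection available.");
}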
Regarding implementation... The RunState of a test applies to its status before anyone has even tried to execute it. So, for example, the RunState may be Ignored if someone has used the IgnoreAttribute, or it may be NotRunnable if it requires arguments and none are provided. There is no Inconclusive run state because that would mean the test is written to be inconclusive all the time, which makes no sense. The IApplyToTest interface allows an attribute to change the status of a test at the point of discovery, before it is even run, so you would not want to use that.
After NUnit has attempted to run a test, it gets a ResultState, which might be Inconclusive. You can affect this in the code of the test but not currently through an attribute. What you want here is something that checks the conditions needed to run the test immediately before running it and skips execution if the conditions are not met. That attribute would need to be one that generates a command in the chain of commands that execute a test. It would probably need to implement ICommandWrapper to do that, which is a bit more complicated than IApplyToTest because the attribute code must generate a command instance that will work properly with NUnit itself and with other commands in the chain.
If I had this situation, I believe I would use a Run parameter to indicate whether the internet should be available. Then, the tests could
Assume.That(InternetIsNotNeeded());
silently ignoring those tests, or failing as expected when the internet should be available.
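For instance, a rough sketch of how that could look; the parameter name and test are made up for illustration, and TestContext.Parameters is how NUnit 3 exposes run parameters:
[Test]
public void DownloadsLatestRates()
{
    // Read the run parameter (e.g. supplied by the test runner) and make the
    // test Inconclusive when this run is not expected to have internet access.
    bool internetAvailable = TestContext.Parameters.Get("InternetAvailable", "true") == "true";
    Assume.That(internetAvailable, "This run is configured without internet access.");

    // ... the actual test body goes here ...
}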

Use a puppet module multiple times

I'm using a puppet module from Puppet Forge - https://forge.puppet.com/creativeview/mssql_system_dsn
The documentation indicates to use it like this:
class { 'mssql_system_dsn':
  dsn_name     => 'vcenter',
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
  dsn_64bit    => true,
}
I need to create multiple odbc data sources.
However, if I simply duplicate this snippet twice and change the parameters I get a multiple declaration error.
How can I declare this module multiple times?
How can I declare this module multiple times?
You cannot do so without modifying the module. Although it is possible to declare the same class multiple times if you use include-like syntax, that does not afford a means to use different parameters with different declarations. This is all connected to the fact that Puppet classes are singletons. I can confirm based on a quick review of the module's code that its design does not support defining multiple data sources.
I'd encourage you to file an enhancement request with the module author. If that does not quickly bear fruit, then you have the option of modifying the module yourself. It looks like that would be feasible, but not as simple as just changing a class keyword to define.
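To give a feel for the shape of that change, here is a rough sketch of a defined type; the name, parameters, and titles are hypothetical, and the actual DSN resources from the module would still have to be ported into it:
# Unlike a class, a 'define' can be declared many times, each with its own title.
define mssql_system_dsn::data_source (
  String  $db_name,
  String  $db_server_ip,
  String  $sql_version,
  Boolean $dsn_64bit = true,
) {
  # ... the registry/ODBC resources from the original class would go here,
  # keyed off $title so each declaration stays unique ...
}

mssql_system_dsn::data_source { 'vcenter':
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
}

mssql_system_dsn::data_source { 'vumdb':
  db_name      => 'vumdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
}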
As the author didn't answer my request and hadn't merged a pull request from another contributor, I created my own module:
https://forge.puppet.com/garfieldmoore/odbc_data_source
If anyone is interested enough to review my module's code and offer improvements, or let me know where I have not followed best practices, I would appreciate it.

Factory Pattern for Common.js and Node.js

I got to the point that I'd like to have a factory to manage all my dependencies for the modules in a single place instead of having a lot of statements using require all over the place in my code.
I've looked at some approaches that rely on AMD, but I'd like to know how to do it using the node.js/express combination with the out-of-the-box module loader, which I believe uses CommonJS.
I've been thinking of doing something like this:
module.exports = {
  lib: [],
  load: function(name) {
    if (this.lib[name] !== undefined && this.lib[name] !== null) {
      return this.lib[name];
    }
    switch (name) {
      case 'express':
        this.lib[name] = require('express');
        break;
      case 'morgan':
        this.lib[name] = require('morgan');
        break;
      case 'body-parser':
        this.lib[name] = require('body-parser');
        break;
    }
    console.log(this.lib);
    return this.lib[name];
  }
};
Some people say that's more than a factory, it's a mediator pattern; either way, I just wanted to illustrate my point.
My basic requirement is to handle all the dependencies from a single place in the system: if I need to change a dependency, I just change it in this file and it automatically updates through the whole system.
So is there a better way to handle this? Any implementation that has already taken this approach?
Thanks!
Technically this is what require() does internally.
require('foo'); require('foo')
guarantees that it will load and run foo only once. The second call will return a cached copy from its internal module cache.
You can achieve the same naming indirection (and an API adapter, if you'll ever decide to change the implementation without changing callers) by creating JS files or node modules that re-export the modules you actually use, and requiring those instead (e.g. require('./my-express-wrapper') instead of require('express')).
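A tiny sketch of that wrapper idea (the file name is illustrative):
// my-express-wrapper.js
// Re-export the module actually in use; callers require this file instead,
// so swapping the implementation later only touches this one place.
module.exports = require('express');

// elsewhere in the application:
var express = require('./my-express-wrapper');
var app = express();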
if I need to change a dependency I just change it on this file and automatically updates through the whole system.
I'd be concerned that it will cause code to be surprising:
require('factory').load('body-parser'); // loads Formidable!?
I see little benefit in having such a layer of indirection:
Even in the best case of drop-in-replacement it saves very little work, because project-global find'n'replace of require('foo') with require('bar') is an easy task in most text editors.
The hard part of replacing a module (which is unlikely to be 100% compatible) is getting existing code to correctly work with it. This is not avoided by use of the factory pattern. You'll need to write an adapter either way, and sometimes it may even be better to change uses of the module everywhere than to write an emulation layer for an API that probably wasn't good anyway.

Metaprogramming: adding equals(Object o) and hashCode() to a library class

I have a library of domain objects which need to be used in the project; however, we've found that a couple of the classes don't have an equals or hashCode method implemented.
I'm looking for the simplest (and Grooviest) way to add those methods. Obviously I could create a subclass which only adds the methods, but this would be confusing for developers used to the library and would mean we'd have to refactor existing code.
It is not possible to get the source changed (currently).
If I could edit the class I would just use the @EqualsAndHashCode annotation to carry out an AST transformation (at compile time?), but I can't find a way to instruct the compiler to carry out the transformation on a class which I can't directly annotate.
I'm currently trying to work up an example using the ExpandoMetaClass, so I'd do something like:
MySuperClass.metaClass.hashCode = { ->
    // Add dynamic hashCode calculation bits here
}
MySuperClass.metaClass.equals = { Object other ->
    // Add dynamic equals calculation bits here
}
I don't really want to hand-code the hashCode/equals methods for each class, so I'm looking for something dynamic (like @EqualsAndHashCode) which will work with this.
Am I on the right track? Is there a groovier way?
AST Transforms are only applied at compile time, so you'll get no help from the likes of @EqualsAndHashCode. MetaClass hacks are going to be your only option. That said, there are more-elegant ways to impose MetaClass behavior.
Shameless self plug: I did a talk about this kind of stuff last year at SpringOne 2GX: http://www.infoq.com/presentations/groovy-app-architecture
In short, you might find benefit in creating extensions (unless you're in Grails) - http://mrhaki.blogspot.com/2013/01/groovy-goodness-adding-extra-methods.html, or by explicitly adding mixins - http://groovy.codehaus.org/Runtime+mixins ... But in general, these are just cleaner ways to do the exact same thing you're already doing.
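As a concrete (if simplistic) sketch of the metaClass route, assuming the library class is the question's MySuperClass and that comparing its properties is an acceptable notion of equality:
// Register once at application startup. Note this only affects Groovy callers;
// Java code and JDK collections still see the original equals/hashCode.
MySuperClass.metaClass.equals = { Object other ->
    other instanceof MySuperClass &&
        delegate.properties.findAll { k, v -> k != 'class' } ==
        other.properties.findAll { k, v -> k != 'class' }
}
MySuperClass.metaClass.hashCode = { ->
    delegate.properties.findAll { k, v -> k != 'class' }.hashCode()
}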

Ignore certain TypeScript compile errors?

I am wondering if there is a way to ignore certain TypeScript errors upon compilation?
I basically have the same issues most people with large projects have around using the this keyword, and I don't want to put all my classes' methods into the constructor.
So I have got an example like so:
TypeScript Example
It seems to create perfectly valid JS and lets me get around the this keyword issue; however, as you can see in the example, the TypeScript compiler tells me that I cannot compile that code, as the keyword this is not valid within that scope. I don't see why it is an error when it produces okay code.
So is there a way to tell it to ignore certain errors? I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
== Edit ==
(Do not read unless you care about context of this question and partial rant)
Just to add some context to all this to show that I'm not just some nut-job (I am sure a lot of you will still think I am) and that I have some good reasons why I want to be able to allow these errors to go through.
Here are some previous questions I have made which highlight some major problems (imo) with TypeScript's current this implementation.
Using lawnchair with Typescript
Issue with child scoping of this in Typescript
https://typescript.codeplex.com/discussions/429350 (And some comments I make down the bottom)
The underlying problem I have is that I need to guarantee that all logic is within a consistent scope; I need to be able to access things within knockout, jQuery etc. and the local instance of a class. I used to do this with var self = this; within the class declaration in JavaScript and it worked great. As mentioned in some of these previous questions, I cannot do that now, so the only way I can guarantee the scope is to use lambda methods, and the only way I can define one of these as a method within a class is within the constructor. This part is HEAVILY down to personal preference, but I find it horrific that people seem to think that using that syntax is classed as a recommended pattern and not just a workaround.
I know TypeScript is in alpha phase and a lot will change, and I HOPE so much that we get some nicer way to deal with this, but currently I either make everything a huge mess just to get TypeScript working (and this is within hundreds of files which I'm migrating over to TypeScript), or I just make the call that I know better than the compiler in this case (VERY DANGEROUS, I KNOW) so I can keep my code nice and, hopefully, when a better pattern comes out for handling this I can migrate it then.
Also just on a side note I know a lot of people are loving the fact that TypeScript is embracing and trying to stay as close to the new JavaScript features and known syntax as possible which is great, but typescript is NOT the next version of JavaScript so I don't see a problem with adding some syntactic sugar to the language as people who want to use the latest and greatest official JavaScript implementation can still do so.
The author's specific issue with this seems to be solved, but the question asks about ignoring errors, so for those who end up here looking for how to ignore errors:
If properly fixing the error or using more decent workarounds like those already suggested here is not an option, then as of TypeScript 2.6 (released on Oct 31, 2017) there is a way to ignore all errors from a specific line using // @ts-ignore comments before the target line.
The mentioned documentation is succinct enough, but to recap:
// @ts-ignore
const s: string = false;
disables error reporting for this line.
However, this should only be used as a last resort when fixing the error or using hacks like (x as any) is much more trouble than losing all type checking for a line.
As for specifying certain errors, the current (mid-2018) state is discussed here, in Design Meeting Notes (2/16/2018) and further comments, which is basically
"no conclusion yet"
and strong opposition to introducing this fine tuning.
I think your question as posed is an XY problem. What you're going for is: how can I ensure that some of my class methods are guaranteed to have a correct this context?
For that problem, I would propose this solution:
class LambdaMethods {
    constructor(private message: string) {
        this.DoSomething = this.DoSomething.bind(this);
    }

    public DoSomething() {
        alert(this.message);
    }
}
This has several benefits.
First, you're being explicit about what's going on. Most programmers are probably not going to understand the subtle semantic difference between the member and method syntax in terms of codegen.
Second, it makes it very clear, from looking at the constructor, which methods are going to have a guaranteed this context. Critically, from a performance perspective, you don't want to write all your methods this way, just the ones that absolutely need it.
Finally, it preserves the OOP semantics of the class. You'll actually be able to use super.DoSomething from a derived class implementation of DoSomething.
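A quick usage sketch (the variable names are illustrative) of why the constructor-time bind matters:
const obj = new LambdaMethods("hello");

// Because DoSomething was bound in the constructor, it keeps the correct
// 'this' even when handed off as a bare callback.
setTimeout(obj.DoSomething, 100); // alerts "hello" rather than reading 'message' off the wrong 'this'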
I'm sure you're aware of the standard form of defining a function without the arrow notation. There's another TypeScript expression that generates the exact same code but without the compile error:
class LambdaMethods {
    private message: string;
    public DoSomething: () => void;

    constructor(message: string) {
        this.message = message;
        this.DoSomething = () => { alert(this.message); };
    }
}
So why is this legal and the other one isn't? Well, according to the spec: an arrow function expression preserves the this of its enclosing context. So it preserves the meaning of this from the scope in which it was declared. But when declaring a function at the class level, this doesn't actually have a meaning.
Here's an example that's wrong for the exact same reason that might be more clear:
class LambdaMethods {
    private message: string;

    constructor(message: string) {
        this.message = message;
    }

    var a = this.message; // can't do this
}
The way that initializer works by being combined with the constructor is an implementation detail that can't be relied upon. It could change.
I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
One of the high-level goals (that I love) in TypeScript is to extend the JavaScript language and work with it, not fight it. How this operates is tricky but worth learning.
