The logEnable flag is set in config.js. Is there any way to change its value during testing, so that I can improve the branch coverage?
You could exclude parts of the code from coverage: https://github.com/gotwarlost/istanbul/blob/master/ignoring-code-for-coverage.md
Skip an if or else path with /* istanbul ignore if */ or /* istanbul ignore else */ respectively.
For all other cases, skip the next 'thing' in the source with: /* istanbul ignore next */
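For example, a rough sketch of how that could look around the logging check from your question (the config require path and the log call are just illustrative):
const config = require('./config');

/* istanbul ignore if */
if (config.logEnable) {
    // this branch is excluded from branch coverage, so it will not
    // count against you when logEnable is false during the tests
    console.log('verbose logging output');
}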
Or add a single test that checks just those logging functions with both logging enabled and disabled (you can override required modules, like your config, for example with proxyquire: https://github.com/thlorenz/proxyquire).
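A rough sketch of the proxyquire approach (the module path './my-module', the function doSomething, and the logEnable flag are assumptions based on your question; the stub key must match the path your module uses to require the config):
const proxyquire = require('proxyquire');

// load the module under test twice, once per config value, so both branches run
const withLogging = proxyquire('./my-module', { './config': { logEnable: true } });
const withoutLogging = proxyquire('./my-module', { './config': { logEnable: false } });

it('logs when logEnable is true', () => {
    withLogging.doSomething(); // exercises the logging branch
});

it('stays quiet when logEnable is false', () => {
    withoutLogging.doSomething(); // exercises the non-logging branch
});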
Say I want to use this rule:
https://eslint.org/docs/rules/no-debugger
however, I have about 15 files where I want to keep debugger; statements.
Is there something I can add at the top of the .ts/.js files, that can tell ESLint to ignore the no-debugger rule for this particular file?
You can also disable ESLint in the same line:
debugger; // eslint-disable-line no-debugger
You can do that like this:
/* eslint-disable no-debugger */
... code that violates rule ...
/* eslint-enable no-debugger */
Update your ESLint configuration file (.eslintrc etc.) with a rule like this:
"rules": {
"no-debugger":"off"
}
Probably your IDE can help. If you are using VS Code, you can hover over debugger, click Quick Fix..., and select Disable no-debugger for the entire file.
Then the IDE will add the following comment at the top of the file to disable the rule for you:
/* eslint-disable no-debugger */
See more on the eslint no-debugger rule.
OR, if you are using TSLint, add:
"no-debugger": false
to the rules section of tslint.json to disable this warning for all files.
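For example, a minimal tslint.json sketch with that rule switched off:
{
    "rules": {
        "no-debugger": false
    }
}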
I think debugger statements need to be removed, and only used briefly in a development environment.
So you're better off disabling the rule only where you actually use it.
For example:
disable no-debugger for this line
// eslint-disable-next-line no-debugger
debugger
or disable no-debugger for the entire file
/* eslint-disable no-debugger */
Since the last VSCode or ESLint extension update, the whole object is underlined when an error or a warning is detected. Any idea why?
I’m guessing you’re exporting this function from a Webpack configuration module. I can think of two options that come into play.
First, define named function:
module.exports = function withBaseConfiguration() {
return { /* configuration goes here */ }
}
Second, disable the ESLint rule for this file by placing the following at the top:
/* eslint-disable func-names */
I am a newbie in unit testing.
I am performing unit testing on each function in my custom NodeJS package.
Suppose that my package exports functions as follows:
//my-package.js
module.exports = {
listFiles,
copyFiles
}
/**
* @return {Array} a list of files under src_dir
*/
function listFiles(src_dir){
//stat all files under src_dir and make a list containing all file names
...
}
/**
* @param {Array} [allowed_ext] a list of file extensions
*/
function copyFiles(src_dir, dst_dir, allowed_ext = []){
//filter files under src_dir according to allowed_ext
var files = listFiles(src_dir).filter(function(){...})
for(var a_file of files){
//other operations on each single file
...
}
}
I want to unit test both listFiles() and copyFiles(); however, copyFiles() actually depends on listFiles(). What is the best practice for writing unit tests for these functions?
There are a few options you can choose from:
1. You can test listFiles independently and then test copyFiles (which uses the real listFiles)
2. You can test listFiles and then test copyFiles with a mocked version of listFiles injected somehow (there are a few ways to do that)
3. You could test copyFiles only, provided those tests also cover listFiles 100%
I would personally recommend doing a mix of 1 and 2.
Now, as to how to do that - if you e.g. use Jasmine, you can create mocked functions with spies:
https://jasmine.github.io/2.0/introduction.html#section-Spies
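For example, a minimal sketch using the Jasmine 2.x spy API (the names here are just illustrative):
describe('copyFiles', function () {
    it('works with whatever file list it is given', function () {
        // a standalone spy that stands in for listFiles
        var listFiles = jasmine.createSpy('listFiles').and.returnValue(['a.txt', 'b.txt']);

        var files = listFiles('/some/src');

        expect(listFiles).toHaveBeenCalledWith('/some/src');
        expect(files.length).toBe(2);
    });
});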
To mock require statements you could use mock-require:
https://www.npmjs.com/package/mock-require
And I always recommend using a coverage tool like nyc or istanbul:
https://www.npmjs.com/package/nyc
https://www.npmjs.com/package/istanbul
In some of my projects I enforce 100% coverage as a part of npm test and it works pretty well to make sure that everything is tested extensively.
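For example, a minimal package.json sketch that fails the run when coverage drops below 100% (assuming nyc with mocha as the test runner):
{
    "scripts": {
        "test": "nyc --check-coverage --lines 100 --branches 100 --functions 100 mocha"
    }
}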
Hint: It will be easier to mock the functions used by other functions if every one of them is in its own file but it's not the only way to do that.
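To tie that back to your example, here is a rough sketch with mock-require, assuming listFiles has been moved into its own file (the paths ./list-files and ./copy-files are hypothetical):
const mock = require('mock-require');

// register the mock before the module under test is loaded;
// the path must resolve to the same module that copy-files requires
mock('./list-files', function listFiles(src_dir) {
    return ['a.txt', 'b.txt'];
});

const copyFiles = require('./copy-files'); // now picks up the mocked listFiles

// ... run your assertions on copyFiles('/src', '/dst', ['.txt']) here ...

mock.stopAll(); // restore the real modules afterwards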
In compiled languages like C we have a preprocessor that can be used to skip parts of the program without compiling them, effectively just excluding them from the source code:
#ifdef SHOULD_RUN_THIS
/* this code does not always run */
#endif
So that if SHOULD_RUN_THIS is not defined, then the code will never be run.
In node.js we don't have a direct equivalent of this, so the first thing I can imagine is
if (config.SHOULD_RUN_THIS) {
/* this code does not always run */
}
However, in node there is no way to guarantee that config.SHOULD_RUN_THIS will never change, so the if (...) check will be performed every time, in vain.
What would be the most performant way to rewrite it? I can think of
a) create a separate function to allow v8-optimizations:
function f(args) {
if (config.SHOULD_RUN_THIS) {
/* this code does not always run */
}
}
// ...
f(args);
b) create a variable to store the function and set it to an empty function when not needed:
var f;
if (config.SHOULD_RUN_THIS) {
f = (args) => {
/* this code does not always run */
}
}
else {
f = function () {} // does nothing
}
// ...
f(args);
c) do not create a separate function, just leave it in place:
if (config.SHOULD_RUN_THIS) {
/* this code does not always run */
}
What is the most performant way? Maybe some other way...
I personally would adopt ...
if (config.SHOULD_RUN_THIS) {
require('/path/for/conditional/module');
}
The module code is only required when needed; otherwise it is not even loaded into memory, let alone executed.
The only downside is that it is not readily clear which modules are being required, since your require statements are not all positioned at the top of the file.
ES6 modularity adopts this kind of dynamic module request approach.
PS: use of config like this is great since you can, for example, use an environment variable to determine your code path. That is great when spinning up, say, a bunch of Docker containers that you want to behave differently depending on the env vars passed to the docker run commands.
Apologies if you are not a Docker fan :) I am waffling now!
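For example, a minimal config.js sketch driven by an environment variable (SHOULD_RUN_THIS is taken from the question; the exact variable and module names are up to you):
// config.js
module.exports = {
    // e.g. `docker run -e SHOULD_RUN_THIS=true ...` or `SHOULD_RUN_THIS=true node app.js`
    SHOULD_RUN_THIS: process.env.SHOULD_RUN_THIS === 'true'
};

// elsewhere in the app
const config = require('./config');
if (config.SHOULD_RUN_THIS) {
    require('/path/for/conditional/module');
}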
If you're looking for a preprocessor for your JavaScript, why not use a preprocessor for your JavaScript? It's node-compatible and appears to do what you need. You could also look into writing a plugin for Babel or some other JS mangling tool (or V8 itself!)
If you're looking for a way to do this inside the language itself, I'd avoid any optimizations which target a single engine like v8 unless you're sure that's the only place your code will ever run. Otherwise, as has been mentioned, try breaking out conditional code into a separate module so it's only loaded if necessary for it to run.
I have a bunch of unit tests in this folder: src/app/tests/. Do I have to list them individually in intern.js or is there a way to use a wildcard? I've tried
suites: [ 'src/app/tests/*' ]
but that just causes the test runner to try to load src/app/tests/*.js. Do I really have to list each test suite individually?
The common convention is to have an all module which collects your test modules, e.g.:
define([
'./module1',
'./module2',
// ...
], function(){});
Then you simply list the all module in the suites array, like this:
suites: [ 'src/app/tests/all' ],
Generally this is no different from the standard practice with DOH in Dojo 1.x either, other than being under a different module name. AMD loaders do not support globbing in module IDs, so this isn't really a direct limitation of Intern.
It may seem onerous, but ordinarily you would add each module to all.js as you create it, so it's not really that much additional work.
I agree that the verbosity and inflexibility of this configuration is annoying and hard to scale.
While it's not the same as a wildcard, here is how I solve that problem.
Modified intern.js config file:
define(
    [ // dependencies...
        'test/all'
    ],
    function (testSuites) {
        return {
            suites: testSuites.unit,
            functionalSuites: testSuites.functional
        };
    }
)
The power in this comes from the fact that the test/all module can return whatever it wants to. Simply give it some nicely named properties which are arrays of module ID strings and you are ready to rock.
Specifying test modules in the define() dependency array of a module given to suites or functionalSuites does work. But that is not very flexible. It still requires you to cherry-pick test suites and be careful about commas and which ones are commented out, etc. What you really want are named collections that can be exported. I do that like so...
test/all:
define(
    [ // dependencies...
        './unitsuitelist', // array of paths, generated by hand or Grunt, etc.
        './funcsuitelist'
    ],
    function (unitSuites, funcSuites) {
        var experiments,
            funTests,
            usefulTests,
            oldTests,
            myFavoriteUnitSuites,
            myFavoriteFunctionalSuites;

        // any logic you want to construct great collections of test suites...
        myFavoriteUnitSuites = funTests.concat(experiments);
        myFavoriteFunctionalSuites = usefulTests.concat(oldTests);

        return {
            unit: myFavoriteUnitSuites,
            functional: myFavoriteFunctionalSuites
        };
    }
)
Just make the necessary logic one time with a few reasonable collections. Then swap them out in the returned object during development. And if you prefer to change lists of module IDs instead of code, this pattern can still help you. It's easy to auto-generate a list of all test suite file locations within their directories using bash, Grunt, or other tools. This can be automatically fed into the intern.js configuration file with a similar pattern to the one above. Just remove the logic and it can effectively be a wildcard. If each category of test suite (unit and functional) lives in its own directory, it is very easy to generate path lists of all files contained within them.
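For instance, here is a small Node sketch of that idea, generating the unitsuitelist module from a directory of unit tests (the directory layout and file names are assumptions, adjust them to your project):
// generate-suites.js
var fs = require('fs');
var path = require('path');

var dir = 'src/app/tests/unit';
var ids = fs.readdirSync(dir)
    .filter(function (name) { return /\.js$/.test(name) && name !== 'unitsuitelist.js'; })
    .map(function (name) { return dir + '/' + name.replace(/\.js$/, ''); });

// write an AMD module whose value is the array of test module IDs
fs.writeFileSync(
    path.join(dir, 'unitsuitelist.js'),
    'define(function () { return ' + JSON.stringify(ids, null, 2) + '; });'
);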