I'm trying to run mocha tests directly on the WebStorm IDE but without any success.
This is my jsTestDriver.conf file:
server: http://localhost:4224
load:
- node_modules/mocha/mocha.js
- node_modules/chai/chai.js
- node_modules/requirejs/require.js
- mocha-jstd-adapter/src/MochaAdapter.js
- routes/*.js
test:
- test/*.js
timeout: 90
Now I'm taking a look at the buster-format module, which works correctly. Any help is appreciated (I'll post the solution if I find it).
Mocha support will be included in WebStorm 7.0.2. You can try the RC build available at http://confluence.jetbrains.com/display/WI/WebStorm+7+EAP
The result is that it's currently not possible.
As stated in the documentation, you can only:
test front-end JavaScript with mocha and other frameworks, with coverage and everything, using JSTestDriver or Karma;
test node.js code with nodeunit ONLY (so NOT mocha or anything else), and you cannot get coverage with nodeunit either.
Hopefully this option will be implemented soon.
It is possible to run tests with mocha, but without code coverage.
I have my project Excellent.js set up for automatic testing with jest and puppeteer, and it successfully runs all the tests, as can be seen on Travis CI.
But after a lot of configuration tweaks I have been unable to make it report correct coverage. No matter what tests are executed, the coverage does not reflect it at all.
The library contains only a single JavaScript file excellent.js, and my jest.config.js was set up as instructed for coverage:
module.exports = {
collectCoverage: true,
collectCoverageFrom: [
'src/excellent.js'
],
testURL: 'http://localhost/',
setupFiles: [
'./src/excellent.js'
]
};
Here are all the tests, which all pass if you first run npm install and then npm test.
So what am I missing? Why can't I get the coverage reported correctly?
ISSUE
Most of the tests are using Puppeteer and when the code is executed in the browser provided by Puppeteer, that code execution is not reflected in the Jest code coverage reports.
SOLUTION
None of the tests require Puppeteer so I refactored them as Jest tests. The code coverage is now accurate and is currently the following:
File | % Stmts | % Branch | % Funcs | % Lines
excellent.js | 63.47 | 48.7 | 57.78 | 62.96
I created a pull request with these changes.
Additional Info
It is now possible to generate code coverage reports for Puppeteer pages, and there is a library to help view them in Istanbul format, but those code coverage reports are generated independently from Jest.
To do testing in Puppeteer pages and have the coverage from those tests reflected in the reports generated by Jest would require merging the Puppeteer page coverage reports with the Jest coverage report.
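As a rough sketch of the first half of that, Puppeteer's own page.coverage API can collect raw coverage from a page; converting the result to Istanbul format and merging it into the Jest report is a separate step not shown here (the URL below is hypothetical):

```javascript
// Sketch: collect raw JS coverage from a Puppeteer page.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.coverage.startJSCoverage();
  await page.goto('http://localhost:8080'); // hypothetical test URL
  const jsCoverage = await page.coverage.stopJSCoverage();

  // Each entry holds the script url, its text, and the executed ranges.
  for (const entry of jsCoverage) {
    const usedBytes = entry.ranges.reduce((n, r) => n + r.end - r.start, 0);
    console.log(entry.url, `${usedBytes}/${entry.text.length} bytes used`);
  }

  await browser.close();
})();
```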
I'm trying to get a coverage report for the API code. I have all the test cases running perfectly in mocha. My problem is that my API server and the test cases live in separate repositories.
I start my node API server on localhost, on a particular port, and then using supertest in mocha, hit the localhost url to test server's response.
Can you suggest the best way to generate a coverage report for those APIs?
Testing env
If you want to get coverage, supertest should be able to bootstrap the app server, like in the express example.
The drawback is that you must not run your tests against a running server, like
var api = request('http://127.0.0.1:8080');
but you must include your app entrypoint to allow supertest to start it like
var app = require('../yourapp');
var api = request(app);
Of course, this may (or may not) result in a bit of refactoring on your app bootstrap process.
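A minimal sketch of that pattern, assuming an Express app exported from app.js (the file and route names here are hypothetical):

```javascript
// app.js — export the app without calling listen(),
// so supertest can boot it in-process
const express = require('express');
const app = express();
app.get('/ping', (req, res) => res.json({ ok: true }));
module.exports = app;

// test/ping_spec.js — because the server runs in-process,
// the coverage tool can instrument the route handlers
const request = require('supertest');
const app = require('../app');

describe('GET /ping', function () {
  it('responds with ok', function (done) {
    request(app)
      .get('/ping')
      .expect(200, { ok: true }, done);
  });
});
```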
As other options, you can use node CLI debug capabilities or use node-inspector.
Coverage setup
Suppose you are willing to install istanbul alongside mocha to get coverage:
npm install -g istanbul
then
istanbul cover mocha --root <path> -- --recursive <test-path>
cover is the command used to generate code coverage
mocha is the executable js file used to run tests
--root <path> the root path to look for files to instrument (aka the "source files")
-- is used to pass arguments to your test runner
--recursive <test-path> the root path to look for test files
You can also add --include-all-sources to get coverage info on all your source files.
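One caveat worth noting: on some setups istanbul cannot follow the mocha wrapper binary, because it spawns a child process, so you may need to point it at the underlying _mocha executable instead (the lib and test paths below are placeholders for your own layout):

```shell
istanbul cover node_modules/.bin/_mocha --root lib --include-all-sources -- --recursive test
```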
In addition you can get more help running
istanbul help cover
I'm interested in creating fully mocked unit tests, as well as integration tests that check whether some async operation has returned correctly. I'd like one command for the unit tests and one for the integration tests, so I can run them separately in my CI tools. What's the best way to do this? Tools like mocha and jest seem to focus on only one way of doing things.
The only option I see is using mocha and having two folders in a directory.
Something like:
__unit__
__integration__
Then I'd need some way of telling mocha to run all the __unit__ tests in the src directory, and another to tell it to run all the __integration__ tests.
Thoughts?
Mocha supports directories, file globbing and test name grepping which can be used to create "groups" of tests.
Directories
test/unit/whatever_spec.js
test/int/whatever_spec.js
Then run tests against all js files in a directory with
mocha test/unit
mocha test/int
mocha test/unit test/int
File Prefix
test/unit_whatever_spec.js
test/int_whatever_spec.js
Then run mocha against specific files with
mocha test/unit_*_spec.js
mocha test/int_*_spec.js
mocha
Test Names
Create outer blocks in mocha that describe the test type and class/subject.
describe('Unit::Whatever', function(){})
describe('Integration::Whatever', function(){})
Then run the named blocks with mocha's "grep" argument --grep/-g
mocha -g ^Unit::
mocha -g ^Integration::
mocha
It is still useful to keep the file or directory separation when using test names so you can easily differentiate the source file of a failing test.
package.json
Store each test command in your package.json scripts section so it's easy to run with something like yarn test:int or npm run test:int.
{
scripts: {
"test": "mocha test/unit test/int",
"test:unit": "mocha test/unit",
"test:int": "mocha test/int"
}
}
mocha does not support labels or categories. You understand correctly:
you must create two folders, unit and integration, and call mocha like this
mocha unit
mocha integration
How can I properly run jasmine tests using jasmine-node and RequireJS?
I already tried something like this, but it doesn't work (CoffeeScript):
requirejs = require 'requirejs'
requirejs.config { baseUrl: __dirname + '/../' }
requirejs ['MyClasses', 'FooClass'], (MyClasses, FooClass) ->
describe "someProp", ->
it "should be true", ->
expect(MyClasses.FooClass.someProp).toEqual true
Finished in 0 seconds 0 tests, 0 assertions, 0 failures
My goal is to write modular classes using RequireJS, CoffeeScript and classes must be testable with jasmine-node (CI server).
How can I do that please?
Thank you!
EDIT:
I execute the tests with this command (in the directory with the tests):
jasmine-node ./
Jonathan Tran is right, it's the spec in the file name for me.
I have this:
"scripts": {
"install": "cake install",
"test": "node_modules/jasmine-node/bin/jasmine-node --verbose --coffee --runWithRequireJs --captureExceptions spec"
},
in my package.json, and I installed jasmine-node from inside the project with npm install jasmine-node
Minimal test file called RingBuffer.spec.coffee
require ["disrasher"], (mod) ->
describe "A test", ->
it "should fail", ->
expect(1).toEqual 0
It doesn't actually work at the moment, because I don't think I have the project hooked up with require properly. I'll post back here when it does.
If anyone is running into this, much has changed since this question was asked. The first thing to check is still that you're naming your files like thing.spec.coffee.
But if you're running the tests and still seeing the output "0 tests", you need to make a JavaScript file with your requirejs config. This must be JavaScript, not CoffeeScript.
// requirejs-setup.js
requirejs = require('requirejs');
requirejs.config({ baseUrl: __dirname + '/../' });
Then tell jasmine to use this setup file:
jasmine-node --coffee --requireJsSetup requirejs-setup.js ./
One nice thing about this is that you don't need to include the requirejs config in every spec file.
I've tested this on node v12.16, jasmine-node v3.0.0, and requirejs v2.3.6.
It seems that jasmine-node and require.js are completely incompatible. That said, it is possible to run jasmine tests on require.js modules in node using a bit of extra code. Take a look at https://github.com/geddski/amd-testing to see how.
I'm using node with mocha and winston. Is there a way to set it up so it only shows logs for failing tests?
If you run with the min reporter you will only get full output on failed tests: mocha -R min or, if you prefer the verbose option, mocha --reporter min.
As of writing (in 2022), there's now an npm package that does exactly this:
https://www.npmjs.com/package/mocha-suppress-logs
I like it because I like the output from the default mocha reporter. It keeps all that, but hides console output for succeeding tests.
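Assuming the package's README-documented usage is still current, enabling it should be a matter of loading it through Mocha's --require flag, something like:

```shell
npm install --save-dev mocha-suppress-logs
mocha --require mocha-suppress-logs
```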
It's also possible to log further information only when an assertion is about to fail. Note that expect() throws rather than returning false, so the check has to come before the assertion:
if (!next.called) {
console.log("... further information")
}
expect(next.called).to.be.true