Is there a code coverage tool that can detect how much of the code is actually executed? I don't want to use any testing framework here; just from the way users use the application, it should be able to give real-time code coverage details. Is that possible?
Well, yes – just run your application under a suitable coverage runner such as nyc:
For example, if you normally start your app with npm start, install nyc and run:
nyc --reporter=lcov npm start
Of course, you'll need to let it run for a while (so your users actually exercise the app), and then capture the generated LCOV/HTML report.
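To make that concrete (a sketch, not the only way to do it), keep in mind that nyc only writes its data when the wrapped process exits, so the coverage is "real time" only up to the moment you stop the app:

nyc --reporter=lcov --reporter=text npm start
# let users exercise the app for a while, then stop it (Ctrl+C);
# nyc writes raw data to .nyc_output/ and an HTML report to coverage/lcov-report/
open coverage/lcov-report/index.html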
We use Cypress for thorough e2e testing on our site.
The Tech stack is React + Node(koa.js).
We have high test coverage since we tend to simulate most of the user actions (most of the CRUD methods as well).
Sometimes a test suite fails during execution (or is interrupted by something), so a duplicate entry is left behind and the "create" test fails on the next run. Then I need to manually delete the test entries from the site and re-run the pipeline.
We want to make sure that we have a clean database for testing on each run. I could use some advice. Thanks in advance!
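One common approach (a minimal sketch; the /test/reset endpoint and its behavior are assumptions, not something your stack necessarily has) is to reset the database before each spec run rather than cleaning up afterwards, so an interrupted run can't leave stale data for the next one. In Cypress's support file:

// cypress/support/e2e.js (support/index.js in older Cypress versions)
// Hypothetical: the Koa server exposes a test-only endpoint that truncates
// and re-seeds the test database. Guard it so it never runs in production.
before(() => {
  // Runs once per spec file; a relative URL resolves against baseUrl.
  cy.request('POST', '/test/reset');
});

Alternatively, the same reset could be done with a cy.task() that talks to the database directly from the Cypress Node process, or as a dedicated step at the start of the CI pipeline.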
I am working on an Excel Add-in with custom functions using the JavaScript API. I have followed this tutorial.
I am trying to debug this using the web version of Excel, as the logging capabilities are significantly better. However, I am finding that it never registers changes in my functions.ts file. I can change any other code file (e.g. taskpane.ts) and see the changes reflected immediately, but whenever I try to reload the custom functions, I do not see any of the changes.
The commands I'm using:
npm run build, followed by
npm run watch in one terminal and npm run start:web in another.
The behavior is the same whether or not I run npm run watch.
In order to observe any changes I need to completely restart the entire server and reload the plugin.
This makes for a pretty miserable development experience. Has anyone overcome issues like these, or have suggestions as to how I can improve the development process for Excel add-ins?
I would also like to develop using the desktop version of Excel; however, due to the lack of decent logging capabilities, this doesn't seem too feasible.
Sorry to hear you are having problems. Can you please try the following:
Open the project manifest and remove 'dist' from the functions.html URL, for example:
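Assuming the tutorial's default dev-server setup on https://localhost:3000 (your manifest may use a different host), that means a URL such as
https://localhost:3000/dist/functions.html
becomes
https://localhost:3000/functions.html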
Run "npm run build" again and then "npm start" again
Could anyone please recommend a simple test coverage tool for Node.js that doesn't require Mocha? I use simple functions from Node's assert module in my own test framework.
The standard library for code coverage is nyc, so that's my suggestion; however, you aren't going to get code coverage working for free with a test framework you came up with yourself -- you'll need to make sure you are instrumenting the appropriate source files so the report contains what you expect.
The following links, which discuss non-standard uses (non-standard meaning not Jasmine or Mocha), might be useful - one for classic istanbul, one for the newer nyc:
https://github.com/gotwarlost/istanbul/issues/574
https://github.com/istanbuljs/nyc/issues/548
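As a concrete illustration of running a home-grown, assert-based test under nyc (a sketch; the file names src/math.js and test/math.test.js are made up):

// test/math.test.js - no framework, just Node's assert module
const assert = require('assert');
const { add } = require('../src/math');

assert.strictEqual(add(2, 3), 5);
console.log('math tests passed');

Then run it under nyc so the files it loads get instrumented:
nyc --include 'src/**' --reporter=text --reporter=lcov node test/math.test.js
nyc reports on whatever the child process requires, so as long as the test actually requires the files under src/, they show up in the report.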
You can use Jest and Supertest.
Jest -> https://jestjs.io/
SuperTest -> https://www.npmjs.com/package/supertest
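A minimal sketch of that combination (assuming the Koa app is exported from app.js without calling listen(); the /pets route is just an example):

// app.test.js - run with: npx jest --coverage
const request = require('supertest');
const app = require('./app'); // the Koa app instance

test('GET /pets responds with 200', async () => {
  // For Koa, hand Supertest the HTTP handler via app.callback();
  // an Express app could be passed to request() directly instead.
  await request(app.callback()).get('/pets').expect(200);
});

Jest bundles Istanbul-based coverage, so jest --coverage produces the familiar text and lcov/HTML reports without any extra tooling.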
Documentation is pretty sparse on doing coverage with istanbul for integration tests. When I run through my Mocha tests, I get "No coverage information was collected, exit without writing coverage information".
The first thing I do is instrument all my source code:
✗ istanbul instrument . -o .instrument
In my case, this is a Dockerized REST microservice, and I have written Mocha tests that run against it to validate it once it is deployed. My expectation is that istanbul will give me code coverage against the source of that Node service.
Second, I run this command to start node on my instrumented code:
✗ istanbul cover --report none .instrument/server.js
After that, I run my tests from my main src directory as follows (with results):
✗ istanbul cover --report none --dir coverage/unit node_modules/.bin/_mocha -- -R spec ./.instrument/test/** --recursive
swagger-tests
#createPet
✓ should add a new pet (15226ms)
#getPets
✓ should exist and return an Array (2378ms)
✓ should have at least 1 pet in list (2500ms)
✓ should return error if search not name or id
✓ should be sorted by ID (3041ms)
✓ should be sorted by ID even if no parameter (2715ms)
✓ should be only available pets (2647ms)
#getPetsSortedByName
✓ should be sorted by name (85822ms)
#deletePet
✓ should delete a pet (159ms)
9 passing (2m)
No coverage information was collected, exit without writing coverage information
When I run istanbul report, it obviously has nothing to report on.
What am I missing?
See the develop branch of this project to reproduce the issue.
The owner of istanbul helped me to resolve this. I was able to get things going by performing the following steps:
Skip instrumenting the code; it's not needed
Call istanbul with --handle-sigint, as heckj recommended, and remove the --report none flag
Once your server is up, just run tests as normal: ./node_modules/.bin/_mocha -R spec ./test/** --recursive
Shut down the server from step 2 to output the coverage
View the HTML report with open coverage/lcov-report/index.html
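Putting those steps together (paths and file names taken from the question; adjust them to your project), the flow looks roughly like this:

# terminal 1: run the (un-instrumented) server under istanbul
istanbul cover --handle-sigint server.js
# terminal 2: run the tests against the running server
./node_modules/.bin/_mocha -R spec ./test/** --recursive
# back in terminal 1: Ctrl+C the server; istanbul writes coverage on SIGINT
open coverage/lcov-report/index.html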
It looks like you were following the blog post I was just looking at when trying to figure out how to attack this problem:
Javascript Integration Tests Coverage with Istanbul
I don't know specifically what is different between what you've posted above and what that blog post identifies. One thing to check is whether coverage*.json files are getting generated when the code is being executed. I'm not sure exactly when Istanbul generates those files, so you may need to terminate the running instrumented code. There's also a mention of a --handle-sigint option on the cover command in the README, which hints at needing to send a manual SIGINT to get coverage information out of a long-running process.
Looking at one of the bugs, there's obviously been some pain with this in the past, and some versions of istanbul had problems with "use strict" mode in the NodeJS code.
So my recommendation is to run all the tests, then make sure the processes are all terminated before running the report command, and check whether the coverage*.json files have been written somewhere. Beyond that, it might make sense to take this to the GitHub repo as an issue, where there appears to be good activity and answers.
For unit testing Node.js backend server code, I am using node-qunit with Grunt.
Is there any code coverage tool that works with the node-qunit module?
Most of the code coverage tools I'm finding need headless browser support, e.g. PhantomJS, but if I run with that, I get errors for Node.js globals, like "ReferenceError: Can't find variable: require", etc.
So which tool can I use for code coverage of Node.js backend code tested with node-qunit?
If you're solely testing backend code, there's no need to run the tests in a headless browser like PhantomJS. For running code coverage analysis in node, I can recommend istanbul.
I'm not sure whether it works out of the box with node-qunit, though. However, Mocha is a popular Node.js test runner with a QUnit-style interface, and qunit-mocha-ui delivers QUnit's assertions for Mocha, so you could migrate your tests with only a little effort.
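For example (a sketch using Mocha's built-in QUnit-style interface with Node's plain assert; qunit-mocha-ui would additionally give you QUnit's assertion API):

// test/array.test.js - Mocha's "qunit" UI: flat suite()/test() calls
const assert = require('assert');

suite('Array');

test('#indexOf() returns -1 when the value is not present', function () {
  assert.strictEqual([1, 2, 3].indexOf(4), -1);
});

Run it under istanbul with:
istanbul cover node_modules/.bin/_mocha -- --ui qunit ./test/ --recursive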