Hooking up protractor E2E tests with node-replay - node.js

I've been messing around with node-replay (https://github.com/assaf/node-replay) to see if there is a way I can hook it up with my protractor tests to get my tests to run with recorded data (so they run quicker and not so damn slow).
I installed node-replay as instructed on the GitHub page. Then in my test file I include some node-replay code as follows:
describe('E2E: Checking Initial Content', function(){
  'use strict';

  var ptor;
  var Replay = require('replay');
  Replay.localhost('127.0.0.1:9000/');

  // keep track of the protractor instance
  beforeEach(function(){
    browser.get('http://127.0.0.1:9000/');
    ptor = protractor.getInstance();
  });
and my config file looks like this:
exports.config = {
  seleniumAddress: 'http://0.0.0.0:4444/wd/hub',

  // Capabilities to be passed to the webdriver instance.
  capabilities: {
    'browserName': 'chrome'
  },

  // Spec patterns are relative to the current working directory when
  // protractor is called.
  specs: ['test/e2e/**/*.spec.js'],

  // Options to be passed to Jasmine-node.
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 300000
  }
};
Then I try to run my tests with grunt by saying
REPLAY=record grunt protractor
But I get tons of failures. grunt protractor was running all of my tests fine with no failures before I added node-replay, so maybe my logic is flawed in how to connect these two together. Any suggestions as to what I'm missing?
1) E2E: Sample test 1
Message:
  UnknownError:
Stacktrace:
  UnknownError:
    at <anonymous>

The problem is that the HTTP requests to 127.0.0.1:9000 are made by the browser, not from within your Node.js Protractor code, so replay won't work in this infrastructure scenario.
There is ongoing discussion on Protractor tests without a backend here, and some folks rely on mocking the backend client-side with Protractor's addMockModule, in a similar way to what they already do for Karma unit tests.
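For reference, a minimal sketch of that addMockModule approach (the module and route names here are illustrative, and it assumes an Angular app with ngMockE2E available on the page):

browser.addMockModule('httpBackendMock', function() {
  // This function is serialized and executed inside the browser,
  // so it can only use what the page provides (Angular + ngMockE2E).
  angular.module('httpBackendMock', ['ngMockE2E'])
    .run(function($httpBackend) {
      // Return canned data instead of hitting the real backend.
      $httpBackend.whenGET('/api/items').respond([{ id: 1, name: 'mocked' }]);
      // Let everything else (templates, assets) pass through.
      $httpBackend.whenGET(/.*/).passThrough();
    });
});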
Personally I don't agree with mocking for e2e, since the whole point of end-to-end is to test the whole real app.
HTTP replay may not be such a bad idea for making things go faster.
Ideally what I hoped to find was a tool that works like this:
Run a capturing HTTP proxy server the first time, for later replay:
capture 127.0.0.1:9000 --into-port 3333
Run your e2e tests against baseUrl = '127.0.0.1:3333'. All requests/responses will be cached/saved.
Serve the cached content from now on:
replay --at-port 3333
Run your e2e tests again, still on baseUrl port 3333. This time it should run faster since it's serving cached content.
Couldn't find it, let me know if you have better luck!
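For what it's worth, here is a rough sketch of what such a capture/replay proxy could look like in plain Node (the file name, cache format, and reuse of the REPLAY variable are all made up for illustration; this is not an existing tool):

// replay-proxy.js -- rough sketch only
var http = require('http');
var fs = require('fs');

var TARGET_HOST = '127.0.0.1';   // real backend host (assumption)
var TARGET_PORT = 9000;          // real backend port (assumption)
var CACHE_FILE = './replay-cache.json';
var mode = process.env.REPLAY || 'record';   // 'record' or 'replay'

var cache = fs.existsSync(CACHE_FILE)
  ? JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'))
  : {};

http.createServer(function (req, res) {
  var key = req.method + ' ' + req.url;

  if (mode === 'replay' && cache[key]) {
    // Serve the recorded response without touching the backend.
    res.writeHead(cache[key].status, cache[key].headers);
    return res.end(cache[key].body);
  }

  // Forward to the real backend and record what comes back.
  var proxyReq = http.request({
    host: TARGET_HOST,
    port: TARGET_PORT,
    method: req.method,
    path: req.url,
    headers: req.headers
  }, function (proxyRes) {
    var chunks = [];
    proxyRes.on('data', function (c) { chunks.push(c); });
    proxyRes.on('end', function () {
      var body = Buffer.concat(chunks).toString('utf8');
      cache[key] = {
        status: proxyRes.statusCode,
        headers: proxyRes.headers,
        body: body
      };
      fs.writeFileSync(CACHE_FILE, JSON.stringify(cache));
      res.writeHead(proxyRes.statusCode, proxyRes.headers);
      res.end(body);
    });
  });
  req.pipe(proxyReq);
}).listen(3333);

Point the tests at port 3333, run once in record mode to fill the cache, then run again in replay mode to serve from it.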

Related

How can I run some code in Node prior to running a browser test with Intern?

With Intern, how can I run some setup code in Node prior to running browser tests, but not when running Node tests? I know that I could do that outside of Intern completely, but is there anything that's a part of Intern that could handle that?
For a more concrete example: I'm running tests for an HTTP library that communicates with a Python server. When running in Node, I can run spawn("python", ["app.py"]) to start the server. However, in the browser, I would need to run that command before the browser begins running the tests.
Phrased another way: is there a built-in way with Intern to run some code in the Node process prior to launching the browser tests?
By default, Intern will run the plugins configured for node regardless of which environment you're running in.
So, you could create a plugin that hooks into the runStart and runEnd events like this:
intern.on("runStart", () => {
console.log("Starting...");
// Setup code here
});
intern.on("runEnd", () => {
console.log("Ending...");
// Teardown code here
});
These handlers will run inside the Node process, and thus have access to all the available Node APIs.
Additionally, you can detect which environments are being tested by looking at intern.config.environments:
{
  environments: [
    {
      browserName: 'chrome',
      browserVersion: undefined,
      version: undefined
    }
  ]
}
By looking at the environments, you can determine whether or not you need to run your setup code.
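Putting those two pieces together, here is a sketch of a plugin that only spawns the question's Python server when a browser environment is present (the 'node' check and the app.py command are assumptions taken from the question, not a documented Intern recipe):

const { spawn } = require("child_process");

let server;

intern.on("runStart", () => {
  // Only start the backing server when a real browser is being tested.
  const hasBrowser = intern.config.environments.some(
    (env) => env.browserName && env.browserName !== "node"
  );
  if (hasBrowser) {
    server = spawn("python", ["app.py"]);
  }
});

intern.on("runEnd", () => {
  // Tear the server down once the browser tests have finished.
  if (server) {
    server.kill();
  }
});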

How to collect code coverage with Istanbul when executing HTTP endpoints via Postman or Karate

I have a JS project that provides a set of endpoints leveraging Express with a typical express/router pattern.
const express = require('express');
const router = new express.Router();
router.post('/', async (req, res, next) => { });
router.get('/:abc', async (req, res, next) => { });
module.exports = router;
I can successfully start the server with npm start, which calls node ./src/index.js and makes the endpoints available at https://localhost:8080.
I can also successfully test these endpoints utilizing a tool like Postman or automation like Karate.
The problem I'm having is that I can't seem to collect code coverage using Istanbul when exercising the product source JS through http://localhost:8080.
I've tried npm start followed by nyc --all src/**/*.js gradlew test, the latter being the automation that tests the endpoints. This results in 0% coverage, which I'm assuming is due to not running nyc with npm start.
Next I tried nyc --all src/**/*.js npm start and noticed some coverage, but this was just coverage from starting the Express server.
Next I tried nyc --all src/**/*.js npm start followed by gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried putting the prior two commands into a single JS script (myscript.js), running each asynchronously so that the Express server was started before the gradle tests began, and ran nyc --all src/**/*.js myscript.js. The results were the same as in my previous trial, wherein only npm start received code coverage.
Next I tried nyc --all src/**/*.js npm start followed by nyc --all src/**/*.js --no-clean gradlew test and noticed the code coverage results were the same as when no endpoint tests were run.
Next I tried all of the attempts above by wrapping them into package.json scripts and running npm run <scriptName> getting the same exact behavior.
Finally I tried nyc instrument src instrumented/src --compact=false followed by npm run start:coverage, wherein this start:coverage script calls the instrumented index.js via node ./instrumented/src/index.js, followed by gradlew test, followed by nyc report --reporter=lcov. This attempt also failed to produce any additional code coverage from the gradlew endpoint tests.
Doing some research online I came across this post
How do I setup code coverage on my Express based API?
And thought this looked eerily similar to my problem, i.e. Istanbul doesn't know how to cover code when it is exercised by executing endpoints.
I decided to still post this, as the above post is quite old, and I wanted to get opinions and see if there is a better solution than
https://github.com/gotwarlost/istanbul-middleware
EDIT
Adding more specifics about how we start the Express server and run automation without Istanbul today, just to clarify what we're working with and the automation tools we're invested in (mainly Karate and Java).
/*
  calls --> node -r dotenv/config src/index.js
*/
npm start

/*
  calls --> gradlew clean test
  this effectively calls a tool called Karate
  Karate's base url is pointed to: https://localhost:8080
  Karate tests execute endpoints on that base url
  This would be akin to using Postman, however Karate has quite a few configuration options
  https://github.com/intuit/karate
*/
npm test
Through many hours of investigation we've managed to solve this. The prior project posted by @balexandre has been updated to illustrate how to do this.
https://github.com/kirksl/karate-istanbul
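One detail worth calling out: nyc writes its coverage data when the instrumented process exits, so the Express server started for the Karate run has to be shut down gracefully rather than hard-killed. A minimal sketch of that, assuming src/index.js owns the listen() call and a hypothetical ./app module exports the Express app:

// src/index.js -- sketch: shut down cleanly so nyc can flush coverage
const app = require('./app'); // hypothetical module exporting the Express app
const server = app.listen(8080);

process.on('SIGTERM', () => {
  // Closing the server and exiting normally lets nyc write its
  // coverage data; a hard kill would lose it.
  server.close(() => process.exit(0));
});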
As said in the comments, you never start your server to run the tests... the tests will point to your server when you require the server file.
in my example, I'm running mocha with chai and the chai-http package helps to call the server
server.js
const app = require("express")();
// everything else ...
exports.server = app;
in your end-to-end tests, you can easily have:
const chai = require('chai');
const chaiHttp = require('chai-http');
chai.use(chaiHttp);

const server = require("./server.js").server;
...
  it("should calculate the circumference", done => {
    chai
      .request(server) // <-- attach your server here
      .get('/v1/circumference/10')
      .end((err, res) => {
        expect(res.status).to.be.eql(200);
        expect(res.type).to.be.eql('application/json');
        expect(res.body.result).to.eql(62.83185307179586);
        done();
      });
  });
});
I've made a very simple project and pushed it to GitHub so you can check it out and run everything, in order to see how it all works together
GitHub Project
Added
I've added a route so it can show the coverage report (I used the HTML report) and created a static route to it ...
when you run the coverage with npm run coverage, it will generate the report inside the ./report folder, and a simple express route pointed at that folder will let you view it as an endpoint.
commit info for such change
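That static route amounts to something like the following (route and folder names mirror the ./report folder mentioned above):

// in server.js -- expose the generated HTML coverage report
const express = require('express');
const app = express();

// Serve the nyc/istanbul HTML report generated into ./report
app.use('/report', express.static('./report'));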

How to get coverage report for external APIs?

I'm trying to get coverage report for the API code. I have all the test cases running perfectly in mocha. My problem is that my API server and the test cases are written in separate repositories.
I start my node API server on localhost, on a particular port, and then, using supertest in mocha, hit the localhost URL to test the server's response.
Can you suggest me the best way to generate a coverage report for those APIs?
Testing env
If you want to get coverage, supertest should be able to bootstrap the app server, like in the express example.
The drawback is that you must not run your tests against a running server, like
var api = request('http://127.0.0.1:8080');
but you must include your app entrypoint to allow supertest to start it like
var app = require('../yourapp');
var api = request(app);
Of course, this may (or may not) result in a bit of refactoring on your app bootstrap process.
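Concretely, a test would then look something like this (the /health route is just a placeholder):

var request = require('supertest');
var app = require('../yourapp'); // the Express app itself, not a URL
var api = request(app);          // supertest bootstraps the app

describe('GET /health', function() {
  it('responds with 200', function(done) {
    api.get('/health') // placeholder endpoint
      .expect(200, done);
  });
});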
As other options, you can use node CLI debug capabilities or use node-inspector.
Coverage setup
Supposing you are willing to install istanbul alongside mocha to get coverage:
npm install -g istanbul
then
istanbul cover mocha --root <path> -- --recursive <test-path>
cover is the command used to generate code coverage
mocha is the executable js file used to run tests
--root <path> the root path to look for files to instrument (aka the "source files")
-- is used to pass arguments to your test runner
--recursive <test-path> the root path to look for test files
You can then add --include-all-sources to get coverage info on all your source files.
In addition you can get more help running
istanbul help cover

Running w/ intern-runner: nothing outputted to terminal, no code coverage data

I am launching my Intern-based tests through the intern-runner script, like this:
<full_path>\intern\.bin\intern-runner config=unittest/intern
My unittest\intern.js configuration file contains the following:
define({
  reporters: [ "junit", "console", "lcovhtml", "runner" ],
  excludeInstrumentation: /(?:dojo|intern|istanbul|reporters|unittest)(?:\\|\/)/,
  suites: [ "unittest/all_intern.js" ],
  forDebug: console.log("Customized intern config for test runner loaded successfully!"),
  loader: {
    packages: [
      { name: 'resources', location: 'abc/resources' },
      { name: 'stats', location: 'abc/resources/stats' },
      { name: 'nls', location: 'abc/nls' },
      { name: 'widgets', location: 'abc/widgets' },
      { name: 'views', location: 'abc/views' }
    ]
  },
  useLoader: {
    'host-browser': 'node_modules/dojo/dojo.js'
  },
  tunnel: 'NullTunnel',
  useSauceConnect: false,
  webdriver: {
    host: 'localhost',
    port: 4444
  },
  proxyUrl: "http://localhost:8010/",
  environments: [
    {
      browserName: 'chrome'
    }
  ]
});
Output to the terminal/command window looks hopeful:
Customized intern config for test runner loaded successfully!
Listening on 0.0.0.0:9000
Starting tunnel...
Initialised chrome 40.0.2214.111 on XP
And the Chrome browser is indeed launched, and I see my unittests running and passing in the browser contents. However, control never goes back to the terminal/command window--I don't see anything like "634/634 tests pass" or whatever, and I have to Ctrl+C to kill the intern-runner process. And of course, no code coverage files are generated. Is this due perhaps to my file structure? The Intern files are in a completely separate directory from these unit tests--I am not invoking intern-runner from a common parent directory for both Intern libraries and unit test files (and the product files they are testing).
I can create a diagram to illustrate the file/directory structure, if that is important. Note that I did change the Intern structure a bit, like:
<Dir_123>\intern\intern-2.2.2\bin\intern-runner.js
<Dir_123>\intern\intern-2.2.2\lib\<all_the_usual>
<Dir_123>\intern\intern-2.2.2\node_modules\<all_the_usual>
<Dir_123>\intern\.bin\intern-runner.cmd
i.e., what I had changed was to insert an extra "intern-2.2.2" directory after "intern", and the ".bin" directory containing intern-runner.cmd is a peer of "intern-2.2.2". Hope this is not confusing. :(
And note that the "proxyUrl" config property represents the URL that the unittest files and product files are available from the web server. Am I doing this right, by configuring the proxyUrl for this purpose? If I omit it, nothing runs because the default used is localhost:9000. I see in the "Configuring Intern" article on Github that proxyUrl is "the URL to the instrumentation proxy," but I don't really understand what that means.
It looks like you're making pretty good progress. Your directory structure is a bit non-standard (any particular reason for that?), but that shouldn't be a show-stopper. The problem you're seeing is probably due to a proxy misconfiguration. Intern is loading the test client and your unit tests, but the code in the browser is unable to communicate test results back to Intern.
As you mentioned, the proxyUrl parameter is the URL at which Intern's instrumenting proxy can be found. The "instrumenting proxy" is basically just an HTTP server that Intern runs to serve test files and to receive information from browsers under test. (It also instruments JS files as it serves them to gather code coverage data, hence the "instrumenting" part of the name.) By default, it's at localhost:9000. That means a browser under test running on localhost can GET or POST to localhost:9000 to talk to Intern.
You can also run Intern behind another server, like nginx, and have that server proxy requests to Intern. In that case, you need to 1) set Intern's proxyUrl to the address of the proxying server, and 2) set up proxying rules in the server to pass requests back to Intern at localhost:9000.
Intern also has a proxyPort parameter to control the port the instrumenting proxy serves on. The proxy listens at localhost:<proxyPort>, where proxyPort defaults to 9000. If tests are talking to Intern's proxy directly (with no intermediate nginx or Apache or anything), proxyPort will be the same as the port in proxyUrl. If an intermediate server is being used, the two can have different values.
When intern-runner runs unit tests, it tells the test browser to GET <proxyUrl>/client.html?config=.... Since you have some external server running and you've set proxyUrl to that server's address, that server will serve client.html and the other relevant Intern files, allowing the unit tests to run. However, when the unit tests are finished and the browser attempts to communicate this back to Intern at proxyUrl, it's going to fail unless you've configured the external server to proxy requests back to localhost:<proxyPort>.
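In config terms, the external-server setup described above looks like this (8010 is the address from the question; 9000 is Intern's default proxy port):

define({
  // Address the browser uses to reach the external web server; that
  // server must forward these requests on to Intern itself.
  proxyUrl: 'http://localhost:8010/',
  // Port Intern's own instrumenting proxy listens on.
  proxyPort: 9000,
  // ... rest of the configuration as before
});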

using require() in karma tests

I'm using Karma 0.12.28, Karma-Requirejs 0.2.2 (and Karma-jasmine) to run tests.
The config is almost identical to the one from the docs (test files in the requirejs deps and window.__karma__.start as the callback).
Everything works just fine for basic tests. The problem starts when I use require() instead of define() or try to change the context. Basically something like this:
var ctx = require.config({
  context: 'my-context'
});

ctx(['dep1'], function(){
  //....
});
The problem is dep1 fails to load. In DevTools I can see the <script/> tag is created, and I can see the request in the network tab with the proper URL, but its status is canceled. I can open this URL from the context menu, so I'm sure it is correct, but the question remains - why can't I use require() in karma tests?
