I've been looking around for a way to disable console.log in my application while running unit tests, and I found answers saying you can override console.log like this:
console.log = function(){};
I tried putting this in app.js, and it overrides console.log when I'm running the app, but not when running unit tests, so I tried adding it to the test file instead, but then it overrides Mocha/Chai's console.log and I get a blank screen.
Is there a way to override the console.log in all files except the one running?
What you would probably want to do instead is use a logging library such as Loggly or Bunyan. With these you pass the message you want to log to the logging client, and it outputs those logs based on the environment you are in. In your case you want to log during production but not during testing (a bit odd, but whatever). So you would set process.env.NODE_ENV to dev or prod accordingly and the logger would take care of the logging for you. Here's an overview of some loggers.
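If a full logging library feels like overkill, a minimal sketch of the same environment-based idea (assuming you run your tests with NODE_ENV=test; the file name and exported functions are just illustrative) is a tiny logger wrapper:
// logger.js -- a minimal sketch; names are illustrative.
// When NODE_ENV is 'test', log() becomes a no-op so test output stays clean.
var noop = function () {};
var isTest = process.env.NODE_ENV === 'test';

module.exports = {
  log: isTest ? noop : console.log.bind(console),
  // still surface errors during tests
  error: console.error.bind(console)
};
Then require it wherever you would otherwise call console.log (var logger = require('./logger'); logger.log('started');) and run your tests with NODE_ENV=test mocha.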
Related
I've looked at the work done with jest-circus and the new Reporter handler onTestCaseResult, but it doesn't give me what I need. I want to capture the console logging for each test case, to allow for better analysis of errors when running large test suites against other people's API implementations. At present the console logs are only available on the TestResult object in the onTestResult handler, but I would need them on the TestCaseResult object.
Thanks
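In the meantime, one workaround I can sketch (this is not part of Jest's reporter API; the bookkeeping below is entirely my own and the names are placeholders) is to capture console output per test from a setup file registered via setupFilesAfterEnv:
// jest.setup.js -- a sketch only: collect console.log output per test case
// so it can be inspected when a test fails. 'logsByTest' is our own map, not a Jest API.
const logsByTest = new Map();
let consoleSpy;

beforeEach(() => {
  const testName = expect.getState().currentTestName;
  logsByTest.set(testName, []);
  consoleSpy = jest.spyOn(console, 'log').mockImplementation((...args) => {
    logsByTest.get(testName).push(args.join(' '));
  });
});

afterEach(() => {
  consoleSpy.mockRestore();
  // dump or persist logsByTest here, e.g. write it somewhere a custom reporter can read
});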
When launched through the intern-runner command, my tests are still hanging: intern-runner never exits to give me a report, and I can tell that the proxy server is still running on port 9000. The browser I specified in my config just remains open (and no, I did not set leaveRemoteOpen to true). I added some debug output to lib/reporters/webdriver.js, because I saw that's what logs the "Tests complete" message. I could see that the topic.publish('/client/end') code was invoked, but nothing ever responded to this event. Doesn't lib/ClientSuite subscribe to this topic? From that module:
topic.subscribe('/client/end', function (sessionId) {
    console.log("subscribed to '/client/end' for session", sessionId);
    if (sessionId === remote.session.sessionId) {
        clearHandles();
        // get about:blank to always collect code coverage data from the page in case it is
        // navigated away later by some other process; this happens during self-testing when
        // the new Leadfoot library takes over
        remote.setHeartbeatInterval(0).get('about:blank').then(lang.hitch(dfd, 'resolve'));
    }
});
But nothing ever happens, and I don't see my console.log() output. Sorry if I am bringing up things that are red herrings, but I just wanted to do some initial investigation first.
All I want is for my tests to end and my JUnit and LCOV reports to be generated! :( What could be going wrong?
And note: no error messages are logged to the command terminal from which I invoked intern-runner config=unittest/myInternConfig. No errors (obvious ones at least) appear in the terminal where the Selenium server is running.
Update 03/15/15: I added this info in my last comment, but maybe comments get lost in the shuffle on Stack Overflow. In our legacy DOH tests, we used Sinon to fake a server so as to not make real I/O requests to the backend server in unit tests. I didn't see a problem with keeping this in the Intern tests, but apparently there is. When I disabled the test modules that just do
var server = sinon.fakeServer.create();
(well, that, in addition to calling server.respondWith() and server.respond())
intern-runner completed, I got my reports, etc. Then I searched for "intern with sinon" and stumbled upon https://github.com/theintern/intern/issues/47, where jason0x43 linked to his Sinon-with-Intern code at https://github.com/theintern/intern/blob/sinon/sinon.js. I found that very helpful--it seems that in my situation, Sinon's FakeXMLHttpRequest was ALSO faking requests to Intern's proxy server, and that was what was hanging the process.
So, after pretty much using jason0x43's sinon.js code to filter out the "real request," I re-enabled the problematic test modules, re-ran, and everything worked beautifully.
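For anyone hitting the same thing, the filtering idea is roughly this (a sketch based on Sinon's FakeXMLHttpRequest filter API; the URL test is a placeholder you would adjust to match your Intern proxy, which runs on port 9000 in this setup):
// Let requests to the Intern proxy pass through instead of being faked by Sinon.
sinon.FakeXMLHttpRequest.useFilters = true;
sinon.FakeXMLHttpRequest.addFilter(function (method, url) {
  // returning true tells Sinon NOT to fake this request
  return /:9000\//.test(url); // placeholder pattern for the proxy's address
});

var server = sinon.fakeServer.create();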
Again, no errors or warnings of any sort were reported in the terminal or the browser console--it would be great if there could be some sort of heads-up about this pitfall, even if just in a Readme file.
I have a Node.js Express REST API app that works. Good.
I have a Mocha/Chai/Supertest mock that tests the API app above. Good.
But I have to start the app and then independently run the mock test.
How can I run a single grunt command that starts the API app, lets it get up and going, and then runs the mock test?
Or do I need to run the API app in some kind of test mode (via env var) and have test-only logic somehow invoke the mock test?
I can try some things and get something to work, but what is the good way? (Avoiding the overused phrase 'best practice'.)
You can do that with grunt-express-server and grunt-mocha-test; you just have to set up your task like below:
grunt.registerTask('test', ['express:test', 'mochaTest']);
This will run your Express server with the config you have set for the test environment, then run Mocha, when you run grunt test.
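A minimal Gruntfile sketch of that setup might look like the following (the script path, spec glob, and option values are assumptions to adapt to your project):
// Gruntfile.js -- a sketch; paths and options are illustrative.
module.exports = function (grunt) {
  grunt.initConfig({
    express: {
      test: {
        options: {
          script: 'server.js',  // your API app's entry point
          node_env: 'test'      // start it in the test environment
        }
      }
    },
    mochaTest: {
      test: {
        options: { reporter: 'spec' },
        src: ['test/**/*.js']   // your Mocha/Chai/Supertest specs
      }
    }
  });

  grunt.loadNpmTasks('grunt-express-server');
  grunt.loadNpmTasks('grunt-mocha-test');

  grunt.registerTask('test', ['express:test', 'mochaTest']);
};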
Since you are using Supertest, I suppose you are doing functional testing, which means you would be using the same database for development and testing (if you are not mocking something). That can waste time and make your tests fail because of bad data. Using two different environments ensures the state of your data when you run the tests.
You can still use a grunt watch plugin to relaunch your tests on file changes if you don't want to do it manually.
Hope this helps
Okay, I'm new to Node, and really only using the Node server to serve static JS, but I can't find any info on this anywhere.
I'm running an Ember App Kit application, which gets built into a Node server.js for deployment, and Heroku runs it with node server.js.
It uses grunt for building, testing, etc.
I'd like to know how I can specify configuration variables (e.g. authentication tokens) that can be overridden by Heroku config variables.
The closest I've been able to get is a custom task that reads environment variables and writes out a json file that gets built into the site (and assigned to a global var). This works locally, but doesn't take into account heroku configs.
I even wrote a deploy script that gets Heroku's config, exports the values as environment variables locally, and does the build--which works, but the config only gets updated on app deploy. So if I do a heroku config:add CONFIG_TEST=test_value, my app doesn't see that value for CONFIG_TEST until the next time I deploy the app.
I'd like for my app to start embedding that config value in the browser JS immediately.
Any way to do this with node the way my app is set up?
I am not sure I understand what's wrong with simply taking config variables from the environment at run time. Use process.env.KEY in your code, embed that value into whatever template you may have, and serve the result.
When you change Heroku config variables, your process gets restarted, so it picks up the new values.
Is the problem the fact that you serve static files? If so -- can you simply change it so that you use a template engine to do some processing on them before serving?
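For example (a sketch only; the route, file names, and variable names are placeholders), instead of baking the value into a static file at build time, you can render a small config script per request from the current environment:
// server.js -- sketch: read process.env at request time, so that a Heroku restart
// after `heroku config:add` immediately exposes the new value to the browser.
var express = require('express');
var app = express();

app.get('/config.js', function (req, res) {
  res.type('application/javascript');
  res.send('window.ENV = ' + JSON.stringify({
    configTest: process.env.CONFIG_TEST || 'default-value'
  }) + ';');
});

app.use(express.static('dist')); // the rest of the built app stays static

app.listen(process.env.PORT || 3000);
The browser then loads /config.js via a script tag before the app code and reads window.ENV.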
OK, here's a solution for ember-app-kit using grunt-sed.
In EMBER_APP_KIT_PROJECT/tasks/options/sed.js
Add something like
module.exports = {
  version: {
    path: "./dist/",                     // run the replacement over the built output
    pattern: '{{env.API_BASE_PATH}}',    // placeholder string to look for
    replacement: function () {
      return process.env.API_BASE_PATH;  // value taken from the environment when the task runs
    },
    recursive: true
  }
};
then in your code just put
"{{env.API_BASE_PATH}}"
Now, when you run
$ grunt sed
it will replace "{{env.API_BASE_PATH}}" with whatever's in the environment variable.
I am working on an AngularJS application that is delivered by a SocketStream/node.js server.
I have an AngularJS service that calls api functions on the SocketStream server and progress has been good so far.
But now the time has come to start writing the first tests, and the first testing framework that came to mind is Karma/Jasmine, since this is the recommended AngularJS setup.
So far so good, but since my AngularJS modules are imported using 'require' (SocketStream's version, not require.js) and server API calls are part of the test, I need to configure Karma to load SocketStream (at least its client side).
I took a good look at 'https://github.com/yiwang/angular-phonecat-livescript-socketstream', but when I run this example I get run-time errors, possibly because I have later versions of various dependencies installed.
I managed to get 'require' resolved by packing my SocketStream app, adding 'ss.client.packAssets()' to app.js and running 'SS_PACK=1 node app.js', but when I start Karma it logs an error message saying:
'Chrome 23.0 (Linux) ERROR
Uncaught TypeError: undefined is not a function
at /the...path/client/static/assets/app/1368026081351.js:25'
'1368026081351.js' is the SocketStream packed-assets file. If I don't load it, the error message is something like 'require is undefined', so my best guess is that the error is happening somewhere inside the SocketStream require code. I also know this because I run Karma in DEBUG mode and can see all the files being served.
I have been trying different approaches to find out what is happening, but to no avail. So my questions are:
Is anybody else successfully testing AngularJS/SocketStream using Karma?
Does anybody have any suggestions as to how I can fix, or at least debug this problem?
Are there any alternatives/better solutions?
Time to answer, sort of, my own question:
Sort of, because I came to the conclusion that Karma and node.js/SocketStream have a lot of overlap, so I decided to see if I could omit Karma altogether and deliver the Jasmine testing platform through SocketStream. It turns out that this is possible, and here's how I did it:
I defined a new SocketStream route and client in my 'app.js' file:
ss.client.define('test', {
    view: 'SpecRunner.html',
    css: ['libs/test'],
    code: ['libs', 'tests', 'app'],
    tmpl: 'none'
});

ss.http.route('/test', function (req, res) {
    res.serveClient('test');
});
I downloaded jasmine-standalone-1.3.1.zip and copied 'SpecRunner.html' to the 'client/views' folder. I then edited it to make it load AngularJS and all SocketStream client files, like all other views:
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular-resource.min.js"></script>
<SocketStream/>
I removed the 'script' tags that import the sample source files ('Player.js' and 'Song.js') and specs, but left the last 'script' block in place unmodified.
I then created a new folder inside 'client/css/libs' called 'test' and copied 'jasmine.css' in there unmodified.
Then I copied 'jasmine.js' and 'jasmine-html.js' renamed to '01-jasmine.js' and '02-jasmine-html.js' but otherwise unmodified, into '/client/code/libs'.
Now Jasmine is in place and will be invoked via the '/test' route. The slightly unsatisfactory bit is that I haven't found an elegant place to store my spec files. So far they only work if I place them inside the 'libs' folder. Anywhere else and they are served by SocketStream as modules and are not run.
But I can live with that for now. I can run Jasmine tests without having to configure a special Karma setup.
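For reference, a spec file dropped into 'client/code/libs' is just plain Jasmine; something like this (the names are illustrative) runs when you hit '/test':
// client/code/libs/99-sanity-spec.js -- illustrative spec; Jasmine itself is
// already loaded by 01-jasmine.js / 02-jasmine-html.js in the same folder.
describe('SpecRunner sanity check', function () {
  it('runs inside the SocketStream-served test client', function () {
    expect(1 + 1).toEqual(2);
  });
});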