Like the title says...
I use the HTTPS module for Node.js, and when making requests I set a timeout and attach an error listener.
req.setTimeout(6000, function () {
  // mark_completed(true);
  this.abort();
})
.on('error', function (e) {
  if (!this.aborted) {
    // mark_completed(true);
    console.log(e);
  }
});
In both scenarios I want to execute a function to mark my request as completed.
Is it safe to assume that 'error' will always be triggered after the timeout, so that I can place my function exclusively inside the 'error' handler?
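If you'd rather not rely on that ordering, a minimal defensive sketch (assuming mark_completed is your own function and can safely be guarded) is to make the completion call idempotent and invoke it from both handlers:
// Sketch only: mark_completed is assumed to be the asker's own function.
// The guard makes the call idempotent, so it does not matter which event
// fires first, or whether both fire.
let completed = false;

function markCompletedOnce() {
  if (!completed) {
    completed = true;
    mark_completed(true);
  }
}

req.setTimeout(6000, function () {
  markCompletedOnce();
  this.abort();
});

req.on('error', function (e) {
  if (!this.aborted) {
    markCompletedOnce();
    console.log(e);
  }
});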
Related
When using a Redis client (ioredis or node_redis) inside a websocket's message event in a Node.js app, the callback for any command is not fired immediately (the operation does take place on the Redis server, though).
What is strange is that the callback for the first command fires only after I send a second message, and the callback for the second fires after I send a third.
wss.on('connection', (socket, request) => {
  socket.on('message', (data) => {
    console.log("will send test command")
    this.pubClient.hset("test10", "f1", "v1", (err, value) => {
      // callback not firing the first time
      console.log("test command reply received")
    })
  })
})
The Redis command works as expected in other parts of the app, though, and even when placed directly inside the 'connection' handler, like below.
wss.on('connection', (socket, request) => {
  console.log("will send test command")
  this.pubClient.hset("test10", "f1", "v1", (err, value) => {
    // callback fires
    console.log("test command reply received")
  })
  socket.on('message', (data) => {})
})
UPDATE:
I had this all wrong. The weird callback behavior was caused by one of my custom Redis modules not returning a reply.
That seems to have given every callback after that call a one-step delay.
If I understand Node correctly, a listener will only receive events that occur after it's attached. Suppose missing.txt is a missing file. This works:
'use strict';
const fs = require( 'fs' );
var rs = fs.createReadStream( 'missing.txt' );
rs.on('error', (err) => console.log('error ' + err) );
It produces: error Error: ENOENT: no such file or directory, open ...\missing.txt
Why does that work? Changing the fourth line as follows also works:
setTimeout( () => rs.on('error', (err) => console.log('error ' + err)) , 1);
But change the timeout to 5ms, and the error is thrown as an unhandled event.
Am I setting up a race that happens to catch the emitted error if the delay to add the event listener is short enough? Does that mean I really should do an explicit check for the existence of the file before opening it as a stream? But that could create another race, as the Node docs state with respect to fs.exists: "other processes may change the file's state between the two calls."
Moreover, the event listener is convenient because it will catch other errors.
Is it best practice to just assume that, without introducing an explicit delay, the event listener will be added fast enough to hear an error from attempting to stream a non-existent file?
This error occurs when no such location exists, or when the program does not have permission to create the file.
This might be helpful:
var filename = __dirname + req.url;
var readStream = fs.createReadStream(filename);
readStream.on('open', function () {
  readStream.pipe(res);
});
readStream.on('error', function (err) {
  res.end(err);
});
Why are you listening for the error on a timeout?
Thanks
Any errors that occur after getting a ReadStream instance from fs.createReadStream() will not be thrown/emitted until at least the next tick. So you can always attach the 'error' listener to the stream after creating it so long as you do so synchronously. Your setTimeout experiment works sometimes because the ReadStream will call this.open() at the end of its constructor. The ReadStream.prototype.open() method calls fs.open() to get a file descriptor from the file path you provided. Since this is also an asynchronous function it means that when you attach the 'error' listener inside a setTimeout you are creating a race condition.
So it comes down to which happens first, fs.open() invoking its callback with an error or your setTimeout() invoking its callback to attach the 'error' listener. It is completely fine to attach your 'error' listener after creating the ReadStream instance, just be sure to do it synchronously and you won't have a problem with race conditions.
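To make that concrete, a minimal sketch (assuming missing.txt does not exist):
'use strict';
const fs = require('fs');

// Safe: ReadStream defers its internal fs.open() call, so an ENOENT error
// cannot be emitted before the next tick. Attaching the 'error' listener
// synchronously therefore always wins the race.
const rs = fs.createReadStream('missing.txt');
rs.on('error', (err) => console.log('handled: ' + err));

// By contrast, deferring the listener (e.g. inside setTimeout) races against
// fs.open()'s callback; if the open error arrives first, the 'error' event
// has no listener and Node throws it as an uncaught exception.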
I want to test the error path of a request. I'm using nock in my tests; how can I force nock to produce an error? I want to achieve 100% test coverage and need to exercise the err branch for that:
request('/foo', function (err, res) {
  if (err) console.log('boom!');
});
It never enters the if (err) branch; even a 400 hit is a valid response as far as err is concerned. My nock line in the test looks like this:
nock('http://localhost:3000').get('/foo').reply(400);
Edit, thanks to some comments:
I'm trying to mock an error in the request. From node manual:
https://nodejs.org/api/http.html#http_http_request_options_callback
If any error is encountered during the request (be that with DNS resolution, TCP level errors, or actual HTTP parse errors) an 'error' event is emitted on the returned request object
An error status code (e.g. 4xx) doesn't set the err variable. I'm trying to mock exactly that: whatever error sets the err variable to something truthy.
Use replyWithError.
From the docs:
nock('http://www.google.com')
  .get('/cat-poems')
  .replyWithError('something awful happened');
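For completeness, here is a rough sketch of how that plays out against the test from the question (assuming the request module and the /foo path from the original snippet):
const nock = require('nock');
const request = require('request');

// replyWithError makes the request emit an 'error' event, so the callback's
// err argument is populated instead of res.
nock('http://localhost:3000')
  .get('/foo')
  .replyWithError('something awful happened');

request('http://localhost:3000/foo', function (err, res) {
  if (err) console.log('boom!'); // this branch is now reached
});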
When you initialise an http(s) request with request(url, callback), it returns an event emitter instance (along with some custom properties/methods).
As long as you can get your hands on this object (this might require some refactoring, or perhaps it might not even be suitable for you), you can make this emitter emit an error event, thus firing your callback with err being the error you emitted.
The following code snippet demonstrates this.
'use strict';

// Just importing the module
var request = require('request')
  // google is now an event emitter that we can emit from!
  , google = request('http://google.com', function (err, res) {
      console.log(err) // Guess what this will be...?
    })

// In the next tick, make the emitter emit an error event
// which will trigger the above callback with err being
// our Error object.
process.nextTick(function () {
  google.emit('error', new Error('test'))
})
EDIT
The problem with this approach is that it, in most situations, requires a bit of refactoring. An alternative approach exploits the fact that Node's native modules are cached and reused across the whole application, thus we can modify the http module and Request will see our modifications. The trick is in monkey-patching the http.request() method and injecting our own bit of logic into it.
The following code snippet demonstrates this.
'use strict';

// Just importing the modules
var request = require('request')
  , http = require('http')
  , httpRequest = http.request

// Monkey-patch the http.request method with
// our implementation
http.request = function (opts, cb) {
  console.log('ping');
  // Call the original implementation of http.request()
  var req = httpRequest(opts, cb)
  // In the next tick, simulate an error in the http module
  process.nextTick(function () {
    req.emit('error', new Error('you shall not pass!'))
    // Prevent Request from waiting for
    // this request to finish
    req.removeAllListeners('response')
    // Properly close the current request
    req.end()
  })
  // We must return this value to keep it
  // consistent with the original implementation
  return req
}

request('http://google.com', function (err) {
  console.log(err) // Guess what this will be...?
})
I suspect that Nock does something similar (replacing methods on the http module) so I recommend that you apply this monkey-patch after you have required (and perhaps also configured?) Nock.
Note that it will be your task to make sure you emit the error only when the correct URL is requested (inspecting the opts object) and to restore the original http.request() implementation so that future tests are not affected by your changes.
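A small sketch of that restore step, assuming a Mocha-style beforeEach/afterEach pair (adapt to whatever test runner you actually use):
var http = require('http');
var originalRequest = http.request;

beforeEach(function () {
  // install the monkey-patched http.request here (see the snippet above)
});

afterEach(function () {
  // restore the original implementation so later tests are unaffected
  http.request = originalRequest;
});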
Posting an updated answer for using nock with request-promise.
Let's assume that your code calls request-promise like this:
require('request-promise')
  .get({
    url: 'https://google.com/'
  })
  .catch(res => {
    console.error(res);
  });
you can set up nock like this to simulate a 500 error:
nock('https://google.com')
  .get('/')
  .reply(500, 'FAILED!');
Your catch block would log a StatusCodeError object:
{
  name: 'StatusCodeError',
  statusCode: 500,
  message: '500 - "FAILED!"',
  error: 'FAILED!',
  options: {...},
  response: {
    body: 'FAILED!',
    ...
  }
}
Your test can then validate that error object.
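For example, a rough sketch of such a validation with Node's built-in assert module (the property names follow the StatusCodeError shape shown above):
const assert = require('assert');

require('request-promise')
  .get({ url: 'https://google.com/' })
  .catch(err => {
    // validate the simulated 500 produced by the nock interceptor above
    assert.strictEqual(err.name, 'StatusCodeError');
    assert.strictEqual(err.statusCode, 500);
    assert.strictEqual(err.error, 'FAILED!');
  });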
It looks like you're looking for an exception on a nock request; maybe this can help you:
var nock = require('nock');

var google = nock('http://google.com')
  .get('/')
  .reply(200, 'Hello from Google!');

try {
  google.done();
}
catch (e) {
  console.log('boom! -> ' + e); // pass exception object to error handler
}
I have a problem with Mocha.
If I run it programmatically from Jake, Mocha breaks down and shows nothing more than some error output like:
AssertionError: There is a code 200 in response
at Socket.<anonymous> (/home/X/Y/Z/test/test_Server.js:70:4)
at Socket.EventEmitter.emit (events.js:93:17)
at TCP.onread (net.js:418:51)
Run from the command line it gives the more expected results, that is:
19 passing (30ms)
7 failing
1) RTDB accepts connection with package and response with code 200 if correct package was send:
Uncaught AssertionError: There is a code 200 in response
at Socket.<anonymous> (/X/Y/Z/test/test_Server.js:70:4)
at Socket.EventEmitter.emit (events.js:93:17)
at TCP.onread (net.js:418:51)
2) XYZ should be able to store GHJ for IJS:
Error: expected f...
...
The problem is the following code:
test('accepts connection with package and response with code 400 ' +
     'if wrong package was send', function (done) {
  console.log('client connecting to server');
  var message = '';
  var client = net.connect(8122, 'localhost', function () {
    client.write('Hello');
    client.end();
  });
  client.setEncoding('utf8');
  client.on('data', function (data) {
    message += data;
  });
  client.on('end', function (data) {
    assert(message.indexOf('400') !== -1, 'There is a code 400 in response');
    done();
  });
  client.on('error', function (e) {
    throw new Error('Client error: ' + e);
  });
});
If I do
assert(message.indexOf('400') !== -1, 'There is a code 400 in response');
just after
var message = '';
Mocha fails correctly (I mean it displays the errors etc.), so this is the fault of the async assertion done in the event handler. How can I correct that? It's a real problem because this test runs first, and I get no clue where to look for the source of the problem (if there is any). Should I somehow catch this assertion error and pass it to Mocha?
EDIT:
Answering the comment about how Jake is running Mocha, just like this:
var Mocha = require('mocha');
...
task("test", [], function() {
  // First, you need to instantiate a Mocha instance.
  var mocha = new Mocha({
    ui: 'tdd',
    reporter: 'dot'
  });
  // Then, you need to use the method "addFile" on the mocha
  // object for each file.
  var dir = 'test';
  fs.readdirSync(dir).filter(function(file){
    // Only keep the .js files
    return file.substr(-3) === '.js';
  }).forEach(function(file){
    // Use the method "addFile" to add the file to mocha
    mocha.addFile(
      path.join(dir, file)
    );
  });
  // Now, you can run the tests.
  mocha.run(function(failures){
    if(failures){
      fail("Mocha test failed");
    } else {
      complete();
    }
  });
}, {async: true});
I'm assuming since you say "programmatically" that your Jakefile issues require("mocha") and then creates a Mocha object on which it calls the run method.
If this is the case, then the reason it does not work is because Jake and Mocha are working at cross purposes. When Mocha executes a test, it traps unhandled exceptions. Schematically, (omitting details that are not important) it is something like:
try {
  test.run();
}
catch (ex) {
  recordFailure();
}
It is at the call to test.run that the test is executed. For tests that are purely synchronous, no problem. When a test is asynchronous, the asynchronous callback which is part of the test cannot execute inside the try... catch block Mocha establishes. The test will launch the asynchronous operation and return immediately. At some point in the future, the asynchronous operation calls the callback. When this happens, Mocha is not able to catch the exception in the asynchronous operation with a try... catch block. How does it catch such exceptions then? It listens to uncaughtException events.
Now, the problem when Mocha is run in the same execution context as Jake is that Jake also wants to trap uncaught exceptions. Jake sometimes has to launch asynchronous operations and wants to trap cases where these operations fail, so it listens to uncaughtException too. It installs its listener first. So when an asynchronous Mocha test fails with an exception, Jake's listener is called, which causes Jake to stop execution immediately. Mocha never gets a chance to act.
I don't see a clear way to make both Jake and Mocha cooperate when run in the same execution context. There might be a way to fiddle with the handlers but I doubt that there is a robust way to make it work. (By "robust" I mean a way which will ensure that every single error is trapped and attributed to the correct source.) The vm module might help segregate their contexts while keeping them in the same OS process.
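A tiny illustrative sketch of the conflict (the handler names are made up; the real Jake and Mocha internals differ):
// Whichever 'uncaughtException' listener was installed first runs first.
process.on('uncaughtException', function jakeHandler(err) {
  console.error('Jake sees the error first:', err.message);
  process.exit(1); // Jake stops; the next listener never runs
});

process.on('uncaughtException', function mochaHandler(err) {
  console.error('Mocha would have recorded this as a test failure:', err.message);
});

setTimeout(function () {
  // simulates an assertion throwing inside an async test callback
  throw new Error('There is a code 200 in response');
}, 0);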
Based on this answer: https://stackoverflow.com/a/9132271/2024650
In a few words: I remove the listener on uncaughtException in Jake. This allows Mocha to handle these uncaughtExceptions. At the end, I add the listener back.
This solves my problem for now:
task("test", [], function() {
var originalExeption = process.listeners('uncaughtException').pop();
//!!!in node 0.10.X you should also check if process.removeListener isn't necessary!!!
console.log(originalExeption);
// First, you need to instantiate a Mocha instance.
var mocha = new Mocha({
ui: 'tdd',
reporter: 'dot'
});
// Then, you need to use the method "addFile" on the mocha
// object for each file.
var dir = 'test';
fs.readdirSync(dir).filter(function(file){
// Only keep the .js files
return file.substr(-3) === '.js';
}).forEach(function(file){
// Use the method "addFile" to add the file to mocha
mocha.addFile(
path.join(dir, file)
);
});
// Now, you can run the tests.
mocha.run(function(failures){
if(failures){
fail("Mocha test failed");
} else {
complete();
}
process.listeners('uncaughtException').push(originalExeption);
});
}, {async: true});
It seems that you are testing an HTTP server by connecting to it over raw TCP (correct me if I'm wrong). If that's the case, you should drop your current test and use an appropriate module for testing an HTTP server or REST API; there are plenty of modules, like Superagent.
You should try calling client.end() after you have received your first 'data' event; that way the assert is called right after the first header is received.
If you want to continuously send and test requests, put all your asserts in the 'data' event and call the correct assert each time it receives the header you want to test. Just remember that you must call done() when the test is supposed to finish, and that it can't be delayed for a long period of time; the requests must come one after another.
Other than that, you can use the async module if you want to test chained requests (one request depends on another, and so on). In some cases it's useful to raise the Mocha timeout above 10000 ms (10 s) to give the async part time to complete, as sketched below.
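If moving away from raw TCP is an option, a rough Superagent sketch of the same 400 check could look like the following (the port and message mirror the original test; that the server actually speaks HTTP is an assumption):
var request = require('superagent');
var assert = require('assert');

test('responds with code 400 if a wrong package was sent', function (done) {
  request
    .post('http://localhost:8122/')
    .send('Hello')
    .end(function (err, res) {
      // Superagent sets err for 4xx/5xx responses; res.status carries the code
      assert(res && res.status === 400, 'There is a code 400 in response');
      done();
    });
});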
I am getting the following error:
events.js:48
throw arguments[1]; // Unhandled 'error' event
^
Error: socket hang up
at createHangUpError (http.js:1091:15)
at Socket.onend (http.js:1154:27)
at TCP.onread (net.js:363:26)
In node v0.6.6, my code has multiple http.request and .get calls.
Please suggest ways to track what causes the socket hang up, and on which request/call it is.
Thank you
Quick and dirty solution for development:
Use longjohn; you get long stack traces that will contain the async operations.
Clean and correct solution:
Technically, in node, whenever you emit an 'error' event and no one listens to it, it will throw. To make it not throw, put a listener on it and handle it yourself. That way you can log the error with more information.
To have one listener for a group of calls you can use domains, and also catch other errors at runtime. Make sure each async operation related to http (server/client) is in a different domain context from the other parts of the code; the domain will automatically listen to the 'error' events and propagate them to its own handler. So you only listen to that handler and get the error data, and you also get more information for free. (Domains are deprecated.)
As Mike suggested, you can also set NODE_DEBUG=net or use strace. They both show you what Node is doing internally.
Additionally, you can set the NODE_DEBUG environment variable to net to get information about what all the sockets are doing. This way you can isolate which remote resource is resetting the connection.
In addition to ftft1885's answer
http.get(url, function (res) {
  var bodyChunks = [];
  res.on('data', function (chunk) {
    // Store data chunks in an array
    bodyChunks.push(chunk);
  }).on('error', function (e) {
    // Call callback function with the error object which comes from the response
    callback(e, null);
  }).on('end', function () {
    // Call callback function with the concatenated chunks parsed as a JSON object (for example)
    callback(null, JSON.parse(Buffer.concat(bodyChunks)));
  });
}).on('error', function (e) {
  // Call callback function with the error object which comes from the request
  callback(e, null);
});
When I had this "socket hang up" error, it was because I wasn't catching the requests errors.
The callback function could be anything; it all depends on the needs of your application. Here's an example of a callback logging data with console.log and logging errors with console.error:
function callback(error, data) {
  if (error) {
    console.error('Something went wrong!');
    console.error(error);
  }
  else {
    console.log('All went fine.');
    console.log(data);
  }
}
Use:
req.on('error', function (err) { });
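Since the question is also about finding out which of several calls is failing, a rough sketch of that idea (getWithLabel and the URL are made up for illustration):
var http = require('http');

// Attach an 'error' handler to every request and tag it with a label, so the
// log tells you which call produced the 'socket hang up'.
function getWithLabel(label, options, onResponse) {
  var req = http.get(options, onResponse);
  req.on('error', function (err) {
    console.error('Request "' + label + '" failed:', err.message);
  });
  return req;
}

getWithLabel('users-api', { host: 'example.com', path: '/users' }, function (res) {
  console.log('users-api status:', res.statusCode);
  res.resume(); // drain the response so the socket can be released
});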
Most probably your server socket connection was somehow closed before all http.ServerResponse objects had ended. Make sure that you have stopped all incoming requests before doing something with incoming connections (an incoming connection is something different from an incoming HTTP request).