I've just begun using the MongoDB stream functionality to stream data directly to the express response.
For that, I use the piece of code found in this question:
cursor.stream().pipe(JSONStream.stringify()).pipe(res);
I want to mark the response with a 500 status when the cursor returns a MongoError. Unfortunately, with this code, the error is returned as JSON with a 200 status.
Is there a simple way to handle this? Do I have to handle it in the cursor's error event? If so, how do I avoid streaming directly to the express response once an error has occurred?
EDIT
I've tried a solution with handling the error event on the stream like this:
var stream = cursor.stream();
stream.on('error', function (err) {
  res.status(500).send(err.message);
});
stream.pipe(JSONStream.stringify()).pipe(res);
Unfortunately, when an error occurs, I get Error: write after end
from express, because I've already sent the response in the error event.
How can I flag the response with an error status when the cursor stream fails AFTER I have piped it to the response?
The WriteStream is ended when the ReadStream ends or emits an error.
So you need to prevent this default behaviour when an error happens during pipe. You can do that by passing {end: false} as a pipe option.
This option alters the default behaviour so that even if an error occurs, your write stream stays open and you can keep sending more data down it (e.g. an error status).
var stream = cursor.stream();
stream.on('error', function (err) { // the error must be a parameter here
  res.status(500).send(err.message);
});
stream.on('end', function () {
  // pipe() no longer ends the stream automatically for you,
  // so you have to end it manually
  res.end();
});
stream.pipe(res, {end: false}); // prevent the default behaviour
More information can be found at:
https://nodejs.org/dist/latest-v6.x/docs/api/stream.html#stream_readable_pipe_destination_options
Related
I am trying to use different Axios calls to get some data from a remote server. One by one the calls work, but as soon as I call them directly after each other, an error about the headers is thrown. I did some research already, and I guess the headers of the first call get in the way of the second call. That is probably a very simplistic description of the problem, but I am new to Node.js and to the way those Axios calls work.
This is an example of one of my API calls:
app.get('/api/ssh/feedback', function (req, res) {
  conn.on('ready', function () {
    try {
      let allData = {}
      var command = 'docker ps --filter status=running --format "{{.Names}}"'
      conn.exec(command, function (err, stream) {
        if (err) throw console.log(err)
        stream.on('data', function (data) {
          allData = data.toString('utf8').split('\n').filter(e => e)
          return res.json({status: true, info: allData})
        })
        stream.on('close', function (code) {
          console.log('Process closed with: ' + code)
          conn.end()
        })
        stream.on('error', function (err) {
          console.log('Error: ' + err)
          conn.end()
        })
      })
    } catch (err) {
      console.error('failed with: ' + err)
    }
  }).connect(connSet)
})
I am using express js as middleware and the ssh2 package to get the connection to the remote server. As I mentioned before, the call works but crashes if it is not the first call. I am able to use the API again after I restart the express server.
This is how I am calling the API through Axios in my Node.js frontend:
getNetworkStatus(e){
  e.preventDefault()
  axios.get('/api/ssh/network').then(res => {
    if (res.data.status) {
      this.setState({network_info: 'Running'})
      this.setState({network: res.data.info})
    } else {
      this.setState({network_info: 'No Network Running'})
      this.setState({network: 'No Network detected'})
    }
  }).catch(err => {
    alert(err)
  })
}
I would be really grateful for any help or advice on how to solve this problem. Thanks to everyone who takes the time to help me out.
There are two issues in the code you've provided:
You are making assumptions about 'data' events. In general, never assume the size of the chunks you receive in 'data' events. You might get one byte or you might get 1000 bytes. The event can be emitted multiple times as chunks are received, and this is most likely what is causing the error. Side note: if the command is only outputting text, then you are better off using stream.setEncoding('utf8') (instead of manually calling data.toString('utf8')) as it will take care of multi-byte characters that may be split across chunks.
You are reusing the same connection object. This is a problem because you will continue to add more and more event handlers every time that HTTP endpoint is reached. Move your const conn = ... inside the endpoint handler instead. This could also be causing the error you're getting. A sketch applying both fixes follows.
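For illustration, here is a minimal sketch of the endpoint with both fixes applied. It assumes the same app, connSet, and ssh2 Client from the question; the exact response shapes are my own choices:

const { Client } = require('ssh2');

app.get('/api/ssh/feedback', function (req, res) {
  // Fix 2: create a fresh connection per request instead of reusing one
  // long-lived object that accumulates 'ready' handlers on every call.
  const conn = new Client();
  conn.on('ready', function () {
    const command = 'docker ps --filter status=running --format "{{.Names}}"';
    conn.exec(command, function (err, stream) {
      if (err) {
        conn.end();
        return res.status(500).json({ status: false, error: err.message });
      }
      // Fix 1: buffer every chunk and respond only once the stream closes,
      // since 'data' may fire many times with arbitrarily sized chunks.
      let output = '';
      stream.setEncoding('utf8'); // handles multi-byte chars split across chunks
      stream.on('data', function (chunk) {
        output += chunk;
      });
      stream.on('close', function () {
        conn.end();
        res.json({ status: true, info: output.split('\n').filter(e => e) });
      });
    });
  }).on('error', function (err) {
    res.status(500).json({ status: false, error: err.message });
  }).connect(connSet);
});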
Like the title says...
I use the HTTPS module for Node.js, and when making requests I set a timeout and attach an error listener.
req.setTimeout(6000, function () {
  // mark_completed(true);
  this.abort();
}).on('error', function (e) {
  if (!this.aborted) {
    // mark_completed(true);
    console.log(e);
  }
});
In both scenarios I want to execute a function to mark my request as completed.
Is it safe to assume that 'error' will always be triggered after the timeout, so that I can place my function exclusively inside the 'error' event?
If I understand Node correctly, a listener will only receive events that occur after it's attached. Suppose missing.txt is a missing file. This works:
'use strict';
const fs = require( 'fs' );
var rs = fs.createReadStream( 'missing.txt' );
rs.on('error', (err) => console.log('error ' + err) );
It produces: error Error: ENOENT: no such file or directory, open ...\missing.txt
Why does that work? Changing the fourth line as follows also works:
setTimeout( () => rs.on('error', (err) => console.log('error ' + err)) , 1);
But change the timeout to 5ms, and the error is thrown as an unhandled event.
Am I setting up a race that happens to catch the emitted error only if the delay before adding the event listener is short enough? Does that mean I really should do an explicit check for the existence of the file before opening it as a stream? But that could create another race, as the Node docs state with respect to fs.exists: "other processes may change the file's state between the two calls."
Moreover, the event listener is convenient because it will catch other errors.
Is it best practice to just assume that, without introducing an explicit delay, the event listener will be added fast enough to hear an error from attempting to stream a non-existent file?
This error occurs when no such location exists or when the user's program does not have permission to create it.
This might be helpful:
var filename = __dirname + req.url;
var readStream = fs.createReadStream(filename);

readStream.on('open', function () {
  readStream.pipe(res);
});

readStream.on('error', function (err) {
  res.end(String(err)); // res.end() expects a string or Buffer, not an Error object
});
And why are you listening for the error inside a timeout?
Thanks
Any errors that occur after getting a ReadStream instance from fs.createReadStream() will not be thrown/emitted until at least the next tick. So you can always attach the 'error' listener to the stream after creating it so long as you do so synchronously. Your setTimeout experiment works sometimes because the ReadStream will call this.open() at the end of its constructor. The ReadStream.prototype.open() method calls fs.open() to get a file descriptor from the file path you provided. Since this is also an asynchronous function it means that when you attach the 'error' listener inside a setTimeout you are creating a race condition.
So it comes down to which happens first, fs.open() invoking its callback with an error or your setTimeout() invoking its callback to attach the 'error' listener. It is completely fine to attach your 'error' listener after creating the ReadStream instance, just be sure to do it synchronously and you won't have a problem with race conditions.
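A small sketch contrasting the two cases described above:

'use strict';
const fs = require('fs');

// Safe: the listener is attached synchronously, in the same tick as
// createReadStream(), so it is guaranteed to be in place before the
// internal fs.open() can report ENOENT.
const rs = fs.createReadStream('missing.txt');
rs.on('error', (err) => console.log('caught: ' + err.message));

// Racy: whether this listener wins depends on whether the timer fires
// before or after fs.open() invokes its callback; if it loses, the
// 'error' event is unhandled and the process throws.
const rs2 = fs.createReadStream('also-missing.txt');
setTimeout(() => rs2.on('error', (err) => console.log('maybe caught: ' + err.message)), 5);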
I'm trying to determine when a node WriteStream is done writing:
var gfs = GFS(db);
var writeStream = gfs.createWriteStream(
{
filename: "thismyfile.txt",
root: "myfiles"
});
writeStream.on("finish", function()
{
console.log("finished");
response.send({ Success: true });
});
writeStream.write("this might work");
writeStream.end();
console.log("end");
In my console, I see "end", but never "finished", and there is never a response. The stream is writing properly, however, and it seems to be finishing (I see the completed file in the database). That event just isn't firing. I've tried moving the "this might work" into the call to end() and removing write(); I've also tried passing a string into end(). That string is written to the stream, but still no callback.
Why might this event not be getting fired?
Thank you.
The gridfs-stream module was designed and written primarily for node 0.8.x and below, and does not use the streams2-style base classes provided by require('stream').Writable in node >= 0.10.x. Because of this, it does not get the standardized finish event; it is up to the module implementation itself to emit finish, which it apparently does not.
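If you are stuck on that module, one possible workaround (an assumption worth verifying against your gridfs-stream version) is to listen for the 'close' event it does emit once the file has been stored:

writeStream.on('close', function (file) {
  // Older gridfs-stream versions emit 'close' with the stored file object
  // once the write has been flushed to GridFS; use it in place of 'finish'.
  console.log('finished writing ' + file.filename);
  response.send({ Success: true });
});

writeStream.write('this might work');
writeStream.end();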
I am getting the following error:
events.js:48
throw arguments[1]; // Unhandled 'error' event
^
Error: socket hang up
at createHangUpError (http.js:1091:15)
at Socket.onend (http.js:1154:27)
at TCP.onread (net.js:363:26)
In node v0.6.6, my code has multiple http.request and .get calls.
Please suggest ways to track down what causes the socket hang up, and which request/call it comes from.
Thank you
Quick and dirty solution for development:
Use longjohn; you get long stack traces that will contain the async operations.
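Usage is minimal: requiring the package once at startup is enough (this sketch assumes longjohn has been installed, and guards against loading it in production, where its overhead is significant):

// Development only: longjohn patches stack traces to span async boundaries.
if (process.env.NODE_ENV !== 'production') {
  require('longjohn');
}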
Clean and correct solution:
Technically, in Node, whenever an 'error' event is emitted and no one listens for it, it will throw. To keep it from throwing, attach a listener and handle the error yourself; that way you can log the error with more information.
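A minimal sketch of that pattern with http.get (the URL is only a placeholder):

const http = require('http');

const req = http.get('http://example.com/', function (res) {
  res.resume(); // drain the response so the socket is released
});

// Without this listener, a socket hang up is thrown as an uncaught
// exception; with it, you decide how to log and recover.
req.on('error', function (err) {
  console.error('request failed:', err.message);
});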
To have one listener for a group of calls you can use domains, which also catch other errors at runtime. Make sure each async operation related to http (server/client) runs in a different domain context from the rest of the code; the domain automatically listens for 'error' events and propagates them to its own handler, so you only listen to that handler and get the error data, plus extra information for free. (Note that domains are deprecated.)
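A minimal sketch, if you accept the deprecation warning (the unreachable localhost:9 target is only there to force an 'error' event):

const domain = require('domain'); // deprecated, but still functional
const http = require('http');

const d = domain.create();
d.on('error', function (err) {
  // Every 'error' emitted by anything created inside d.run() lands here.
  console.error('caught by domain:', err.message);
});

d.run(function () {
  http.get('http://localhost:9/', function (res) { res.resume(); });
});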
As Mike suggested, you can also set NODE_DEBUG=net or use strace. They both show you what Node is doing internally.
Additionally, you can set the NODE_DEBUG environment variable to net to get information about what all the sockets are doing. This way you can isolate which remote resource is resetting the connection.
In addition to ftft1885's answer
http.get(url, function (res) {
  var bodyChunks = [];
  res.on('data', function (chunk) {
    // Store data chunks in an array
    bodyChunks.push(chunk);
  }).on('error', function (e) {
    // Call the callback with the error object that comes from the response
    callback(e, null);
  }).on('end', function () {
    // Call the callback with the concatenated chunks parsed as JSON (for example)
    callback(null, JSON.parse(Buffer.concat(bodyChunks)));
  });
}).on('error', function (e) {
  // Call the callback with the error object that comes from the request
  callback(e, null);
});
When I had this "socket hang up" error, it was because I wasn't catching the request errors.
The callback function can be anything; it all depends on the needs of your application. Here's an example of a callback that logs data with console.log and logs errors with console.error:
function callback(error, data) {
  if (error) {
    console.error('Something went wrong!');
    console.error(error);
  } else {
    console.log('All went fine.');
    console.log(data);
  }
}
Use:
req.on('error', function (err) { /* handle the request error here */ });
Most probably your server socket connection was somehow closed before all http.ServerResponse objects had ended. Make sure that you have stopped accepting incoming requests before doing anything with the incoming connections (an incoming connection is something different from an incoming HTTP request).
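A minimal sketch of that kind of orderly shutdown (the port and the SIGTERM trigger are illustrative):

const http = require('http');

const server = http.createServer(function (req, res) {
  res.end('ok');
});
server.listen(3000);

process.on('SIGTERM', function () {
  // close() stops accepting new connections and waits for existing ones
  // to end (note: idle keep-alive sockets can delay the callback).
  server.close(function () {
    console.log('all responses ended, safe to exit');
    process.exit(0);
  });
});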