How to handle undefined upload in NodeJS

I use the Busboy plugin to handle file uploads. It registers a handler for the 'file' event, which fires when a file has been uploaded completely; the handler then performs the next operation. But when the uploaded file is undefined, the handler is registered yet the 'file' event never fires, and it takes a long time for the process to stop handling the request. When I repeat this more than five times, the node process halts for several minutes before returning to normal.
How can I handle this situation in a NodeJS process?
Don't tell me never to upload undefined; I think the same situation can also occur when a file upload fails.

You can use the 'finish' event to stop processing:
busboy.on('finish', function () {
    // your code to stop processing
});
This 'finish' event fires once all 'file'/'field' events for the uploaded form data have been emitted (no event is emitted for an undefined file/field). Further, it is recommended to double-check that in your case you pipe your node request through busboy:
req.pipe(busboy);
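Putting those pieces together, here is a minimal sketch of the full wiring (assuming an Express-style route and the busboy 0.x constructor API; the route path, flag name, and response shape are illustrative):
var Busboy = require('busboy');

app.post('/upload', function (req, res) {
    var busboy = new Busboy({ headers: req.headers });
    var sawFile = false; // set once a real file part arrives
    busboy.on('file', function (fieldname, file, filename) {
        sawFile = true;
        file.resume(); // consume the stream; replace with real handling
    });
    busboy.on('finish', function () {
        // fires even when no file was sent, so the client always gets an answer
        if (!sawFile) {
            res.status(400).json({ error: 'no file uploaded' });
        }
    });
    req.pipe(busboy);
});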

I have solved this problem as follows.
Set a flag for the 'file' event; when the event fires, its handler sets the flag to true. Because the 'file' event fires before the request's 'end' event, I register a handler for the 'end' event of req:
req.on('end', function () {
    if (!flag) {
        res.json({
            // ...
        });
    }
});

Related

Unexpected Node.js program flow

I am new to node.js and working through the API. In the stream module docs I came across this example of the "unpipe event" (actually a fusion of two examples in the docs).
const fs = require("fs");
const writable = fs.createWriteStream("write.txt");
const readable = fs.createReadStream("read.txt");
readable.pipe(writable);
setTimeout(function () {
    console.log("Stop writing to file.txt");
    readable.unpipe(writable);
    console.log("Manually close the file stream");
    writable.end();
}, 0);
writable.on("unpipe", function (src) {
    console.log("Something has stopped piping into the writer");
});
I can't understand the following console.log order:
"Stop writing to file.txt"
"Something has stopped piping into the writer"
"Manually close the file stream"
Given that the setTimeout callback is running (the first phase of the event loop, as I understand it), how on earth does the callback for the "unpipe" event start to run before the setTimeout callback has finished?
Originally I had the setTimeout firing after a delay above zero seconds, but I found that the unpipe callback was always called first. I reasoned that my computer was always finishing the file read before the setTimeout was ready. (I can't see any mention in the docs of the completed write eliciting the "unpipe" event, but this makes sense, I suppose.) However, I can't for the life of me work out how the above program flow is occurring. Thanks in advance for any help.
As specified by the node.js documentation:
The EventEmitter calls all listeners synchronously in the order in which they were registered.
That is, when .emit is called, it synchronously runs through all listeners for the emitted event and calls them.
Note that, if necessary, you can wrap your callback code in process.nextTick to ensure it always runs asynchronously, but in your case that's likely unnecessary.
Also note that the source of the call to .emit (the emission of the event) will often itself be asynchronous.
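To make the synchronous behaviour concrete, a minimal sketch (the event name is illustrative):
const EventEmitter = require("events").EventEmitter;
const emitter = new EventEmitter();

emitter.on("ping", function () { console.log("listener runs"); });

console.log("before emit");
emitter.emit("ping"); // all listeners run synchronously, right here
console.log("after emit");
// Output: before emit / listener runs / after emit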

Why is fs.createReadStream ... pipe(res) locking the read file?

I'm using express to stream audio & video files according to this answer. Relevant code looks like this:
function streamMedia(filePath, req, res) {
    // code here to determine which bytes to send, compute response headers, etc.
    res.writeHead(status, headers);
    var stream = fs.createReadStream(filePath, { start, end })
        .on('open', function () {
            stream.pipe(res);
        })
        .on('error', function (err) {
            res.end(err);
        });
}
This works just fine for streaming bytes to <audio> and <video> elements on the client. However, after these requests are served, another express request can delete the file being streamed from the filesystem. This second request is failing, sort of.
What happens is that once the file has been streamed at least once (meaning a createReadStream was invoked for the file's path while running the code above) and a different express request then comes in to delete the file, the file remains on the filesystem until express is stopped. As soon as express is stopped, the files are deleted from the filesystem.
What exactly is going on here? Is it fs or express that is locking the file, why, and how can I get the process to release the file so that it can be deleted (after its contents have been read and piped to a response, if any is pending)?
Update 1:
I've modified the above code to set autoClose: true in the second function argument, and added both 'end' and 'close' event handlers, like so:
res.writeHead(status, headers);
var streamReadOpts = { start: start, end: end, autoClose: true };
var stream = fs.createReadStream(filePath, streamReadOpts)
    // previous 'open' & 'error' event handlers are still here
    .on('end', function () {
        console.log('stream end');
    })
    .on('close', function () {
        console.log('stream close');
    });
What I have discovered is that when a page initially loads with a <video> or <audio> element, only the 'open' event fires. Then when the user clicks to play the video/audio, a second request is made, and this second time both the 'end' and 'close' events fire, and subsequently deleting the file succeeds.
So it appears the file is locked when a user loads a page whose <video> or <audio> element gets its source from the request that calls this function. It isn't until that media file is played, triggering a second request, that the file is unlocked.
I've also discovered that closing the browser likewise causes the 'end' and 'close' events to fire and the file to be unlocked. My guess is that I'm doing something wrong with the express res that keeps it from closing properly, but I'm still not sure what that could be.
It turned out the solution was to read and pipe smaller blocks of data from the file on each request. In my test cases I was streaming a 6MB MP4 video file. Though I could reproduce the issue in either Firefox or Chrome, I debugged with the latter and found that the client was blocking the stream.
When the page initially loads, there is an element that looks something like this:
<video> <!-- or <audio> -->
<source src="/path/to/express/request" type="video/mpeg" /> <!-- or audio/mpeg -->
</video> <!-- or </audio> -->
As documented in the other answer referenced in the OP, Chrome will send a request with a range header like so:
Range: bytes=0-
For this request, my function was sending the whole file, and my response headers looked like this:
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 6070289
Content-Range: bytes 0-6070288/6070289
Content-Type: video/mp4
However, Chrome was not reading the whole stream. It read only the first 3-4MB, then blocked the connection until a user action caused it to need the rest of the file. This explains why closing the browser or stopping express caused the files to be unlocked: doing so closed the connection from the browser's or the server's end.
My current solution is to send at most a 1MB chunk (the old-school 1MB, 1024 * 1024) at a time. The relevant code can be found in an additional answer to the question referenced in the OP.
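As a rough illustration, here is a minimal sketch of capping each response at a 1MB chunk (a simplified version of that approach, not the exact code from the linked answer; the Content-Type and variable names are illustrative):
var fs = require('fs');
var MAX_CHUNK = 1024 * 1024; // 1MB

function streamMedia(filePath, req, res) {
    var size = fs.statSync(filePath).size;
    var range = req.headers.range || 'bytes=0-';
    var start = parseInt(range.replace(/bytes=/, '').split('-')[0], 10);
    var end = Math.min(start + MAX_CHUNK - 1, size - 1);

    res.writeHead(206, { // 206 Partial Content
        'Accept-Ranges': 'bytes',
        'Content-Length': end - start + 1,
        'Content-Range': 'bytes ' + start + '-' + end + '/' + size,
        'Content-Type': 'video/mp4'
    });
    fs.createReadStream(filePath, { start: start, end: end }).pipe(res);
}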
Set autoClose: true in the options. If autoClose is false you have to close the stream manually, e.g. in the 'end' event.
Refer to the node docs: https://nodejs.org/api/fs.html#fs_fs_createreadstream_path_options
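For illustration, a minimal sketch of the autoClose: false case (the file path is illustrative, and one way of releasing the descriptor is shown):
var fs = require('fs');

var stream = fs.createReadStream('/path/to/media.mp4', { autoClose: false });
stream.on('end', function () {
    // with autoClose: false the file descriptor stays open; close it yourself
    stream.close();
});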

readable.on('end',...) is never fired

I am trying to stream some audio to my server and then stream it on to a service specified by the user. The user provides me with someHostName, which can sometimes not support that type of request.
My problem is that when that happens, clientRequest.on('end', ...) is never fired. I think it's because the request is being piped to someHostReq, which gets messed up when someHostName is "wrong".
My questions are:
Is there any way I can still have clientRequest.on('end', ...) fired even when the stream clientRequest pipes to has something wrong with it?
If not: how do I detect that something went wrong with someHostReq "immediately"? someHostReq.on('error') doesn't fire until after some time.
code:
someHostName = 'somexample.com';
function checkIfPaused(request) { // every 1 second check .isPaused
    console.log(request.isPaused() + '>>>>');
    setTimeout(function () { checkIfPaused(request); }, 1000);
}
router.post('/', function (clientRequest, clientResponse) {
    clientRequest.on('data', function (chunk) {
        console.log('pushing data');
    });
    clientRequest.on('end', function () { // when done streaming audio
        console.log('im at the end');
    });
    var options = {
        hostname: someHostName, method: 'POST', headers: { 'Transfer-Encoding': 'chunked' }
    };
    var someHostReq = http.request(options, function (res) {
        var data = '';
        someHostReq.on('data', function (chunk) { data += chunk; });
        someHostReq.on('end', function () {
            console.log('someHostReq.end is called');
        });
    });
    clientRequest.pipe(someHostReq);
    checkIfPaused(clientRequest);
});
output:
in the case of a correct hostname:
pushing data
.
.
pushing data
false>>>>
pushing data
.
.
pushing data
pushing data
false>>>>
pushing data
.
.
pushing data
im at the end
true>>>>
// continues to be true, that's fine
in the case of a wrong host name:
pushing data
.
.
pushing data
false>>>>
pushing data
.
.
pushing data
pushing data
false>>>>
pushing data
.
.
pushing data
true>>>>
true>>>>
true>>>>
// it stays true and clientRequest.on('end') is never called
// even though the client is still streaming data, no more "pushing data" appears
if you think my question is a duplicate:
it's not the same as this: node.js http.request event flow - where did my END event go? , the OP was just making a GET instead of a POST
it's not the same as this: My http.createserver in node.js doesn't work? , the stream was in paused mode because none of the following happened:
You can switch to flowing mode by doing any of the following:
Adding a 'data' event handler to listen for data.
Calling the resume() method to explicitly open the flow.
Calling the pipe() method to send the data to a Writable.
source: https://nodejs.org/api/stream.html#stream_class_stream_readable
it's not the same as this: Node.js response from http request not calling 'end' event without including 'data' event , he just forgot to add the .on('data',..)
The behaviour in the case of a wrong host name looks like a buffering problem: if the destination stream's buffer is full (because someHost is not consuming the chunks of data sent to it), pipe will stop reading the origin stream, since pipe manages the flow automatically. And because pipe is no longer reading the origin stream, you never reach the 'end' event.
Is there anyway that I can still have clientRequest.on('end',..) fired
even when the stream clientRequest pipes to has something wrong with
it?
The 'end' event will not fire unless the data is completely consumed. To get 'end' to fire on a paused stream you need to call resume() (unpiping from the wrong host first, or you will get stuck on the buffer again) to put the stream into flowing mode again, or read() it to the end.
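In terms of the OP's code, that recovery is a two-liner (a minimal sketch):
clientRequest.unpipe(someHostReq); // detach the stuck destination
clientRequest.resume();            // flowing mode again, so 'end' can fire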
But how to detect when I should do any of the above?
someHostReq.on('error') is the natural place, but if it takes too long to fire:
First, try setting a low request timeout (lower than the time someHostReq.on('error') takes to trigger, which seems too long for you) with request.setTimeout(timeout[, callback]), and check that it doesn't fail with a correct hostname. If that works, just use the callback or the 'timeout' event to detect when the server times out, and use one of the techniques above to reach the end.
If the timeout solution fails or doesn't fit your requirements, you have to play with flags in clientRequest.on('data'), clientRequest.on('end'), and/or clientRequest.isPaused() to guess when you are stuck on the buffer. When you think you are stuck, just apply one of the techniques above to reach the end of the stream. Luckily it takes less time to detect a buffer stall than to wait for someHostReq.on('error') (maybe two request.isPaused() === true readings without reaching a 'data' event are enough to determine that you are stuck).
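A minimal sketch of that flag-based detection (implementing the two-paused-readings heuristic above; the one-second interval and the threshold of 2 are illustrative):
var gotData = false;
var pausedCount = 0;

clientRequest.on('data', function () {
    gotData = true;
    pausedCount = 0;
});

function watchForStall(request) {
    if (request.isPaused() && !gotData) {
        pausedCount++;
        if (pausedCount >= 2) { // paused twice with no 'data' in between: assume stuck
            request.unpipe(someHostReq);
            request.resume(); // let the stream flow so 'end' can fire
            return;
        }
    }
    gotData = false; // reset for the next reading
    setTimeout(function () { watchForStall(request); }, 1000);
}
watchForStall(clientRequest);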
How do I detect that something wrong happened with someHostReq
"immediately"? someHostReq.on('error') doesn't fire up except after
some time.
Errors trigger when they trigger; you cannot detect them "immediately". Why not just send a probe request to check that the service is supported before piping the streams? Something like:
"Checking the service specified by the user..." If OK, pipe the user's request stream to the service; on failure, notify the user about the wrong service.

Node.js: Will node always wait for setTimeout() to complete before exiting?

Consider:
node -e "setTimeout(function() {console.log('abc'); }, 2000);"
This will actually wait for the timeout to fire before the program exits.
I am basically wondering if this means that node is intended to wait for all timeouts to complete before quitting.
Here is my situation. My client has a node.js server he's going to run from Windows via a shortcut icon. If the node app encounters an exceptional condition, it typically exits instantly, not leaving enough time to see in the console what the error was, and this is bad.
My approach is to wrap the entire program in a try/catch, so now it looks like this: try { (function () { ... })(); } catch (e) { console.log("EXCEPTION CAUGHT:", e); }, but of course this also causes the program to exit immediately.
So at this point I want to leave about 10 seconds for the user to take a peek or a screenshot of the exception before it quits.
I figure I should just use a blocking sleep() from an npm module, but I discovered in testing that setting a timeout also seems to work (i.e. why bother with a module if something built-in works?). The significance of this isn't big, but I'm curious whether it is specified somewhere that node will actually wait for all timeouts to complete before quitting, so that I can feel safe doing this.
In general, node will wait for all timeouts to fire before quitting normally. Calling process.exit() will exit before the timeouts.
The details are part of libuv, but the documentation makes a vague comment about it:
http://nodejs.org/api/all.html#all_ref
you can call ref() to explicitly request the timer hold the program open
Putting all of the facts together, setTimeout by default is designed to hold the event loop open (so if that's the only thing pending, the program will wait). You can programmatically disable or re-enable the behavior.
Late answer, but a definite yes - Nodejs will wait around for setTimeout to finish - see this documentation. Coincidentally, there is also a way to not wait around for setTimeout, and that is by calling unref on the object returned from setTimeout or setInterval.
To summarize: if you want Nodejs to wait until the timeout has been called, there's nothing you need to do. If you want Nodejs to not wait for a particular timeout, call unref on it.
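A minimal sketch of both behaviours:
// Node waits for this timer before exiting normally.
const t1 = setTimeout(function () { console.log('fires before exit'); }, 2000);

// Node will NOT wait for an unref'd timer; if nothing else is pending,
// the process may exit before this callback ever runs.
const t2 = setTimeout(function () { console.log('may never fire'); }, 2000);
t2.unref();
// t2.ref() would re-enable the keep-the-process-alive behaviour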
If node didn't wait for all setTimeout or setInterval calls to complete, you wouldn't be able to use them in simple scripts.
Once you tell node to listen for an event, as with the setTimeout or some async I/O call, the event loop will loop until it is told to exit.
Rather than wrapping everything in a try/catch you can bind an event listener to process, just as in the example in the docs:
process.on('uncaughtException', function (err) {
    console.log('Caught exception: ' + err);
});

setTimeout(function () {
    console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
In the uncaughtException event, you can then add a setTimeout to exit after 10 seconds:
process.on('uncaughtException', function (err) {
    console.log('Caught exception: ' + err);
    setTimeout(function () { process.exit(1); }, 10000);
});
If this exception is something you can recover from, you may want to look at domains: http://nodejs.org/api/domain.html
edit:
There may actually be another issue at hand: your client application doesn't do enough (or any?) logging. You can use log4js-node to write to a temp file or some application-specific location.
Easy solution:
Make a batch (.bat) file that starts nodejs
Make a shortcut out of it
Why this is best: this way your client runs nodejs from the command line, and even if the nodejs program exits, nothing happens to the command-line window.
Making the bat file:
Make a text file
Put START cmd.exe /k "node abc.js" in it
Save it
Rename it to abc.bat
Make a shortcut or whatever
Opening it will open the command line and run the nodejs file.
Using setTimeout for this is a bad idea.
The odd ones out are when you call process.exit() or there's an uncaught exception, as pointed out by Jim Schubert. Other than that, node will wait for the timeout to complete.
Node does remember timers, but only if it can keep track of them. At least that is my experience.
If you use setTimeout in an arrow / anonymous function, I would recommend keeping track of your timers in an array, like:
() => {
    timers.push(setTimeout(doThisLater, 2000));
}
and make sure let timers = []; isn't declared in a scope that will vanish, i.e. declare it globally.

Nodejs event handling

Following is my nodejs code
var emitter = require('events'),
    eventEmitter = new emitter.EventEmitter();

eventEmitter.on('data', function (result) { console.log('Im From Data'); });
eventEmitter.on('error', function (result) { console.log('Im Error'); });

require('http').createServer(function (req, res) {
    res.end('Response');
    var start = new Date().getTime();
    eventEmitter.emit('data', true);
    eventEmitter.emit('error', false);
    while (new Date().getTime() - start < 5000) {
        // Let me sleep
    }
    process.nextTick(function () {
        console.log('This is event loop');
    });
}).listen(8090);
Nodejs is single threaded, it runs an event loop, and the same thread serves the events.
So, in the above code, on a request to my localhost:8090 the node thread should be kept busy serving the request [there is a sleep for 5s].
At the same time two events are emitted by eventEmitter. So both these events should be queued in the event loop for processing once the request is served.
But that is not happening; I can see the events being served synchronously as they are emitted.
Is that expected? I understand that if it worked as I expect there would be no use in extending the events module. But how are the events emitted by eventEmitter handled?
Only things that require asynchronous processing are pushed onto the event loop. The standard event emitter in node dispatches events immediately. Only code using things like process.nextTick, setTimeout, setInterval, or code explicitly adding to the loop in C++, such as node's own libraries, affects the event loop.
For example, when you use node's fs library for something like createReadStream, it returns a stream but opens the file in the background. When the file is open, node adds to the event loop, and when the queued function gets called, it triggers the 'open' event on the stream object. Node then loads blocks from the file in the background and adds to the event loop to trigger 'data' events on the stream.
If you want those events to be emitted after 5 seconds, use setTimeout or put the emit calls after your busy loop.
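For instance, a minimal sketch of deferring the emits in the OP's handler (only the two emit lines are moved):
// defer the emits to a later turn of the event loop, so they run
// after the current request handler (and its busy loop) returns
setTimeout(function () {
    eventEmitter.emit('data', true);
    eventEmitter.emit('error', false);
}, 0);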
I'd also like to be clear: you should never have a busy loop like that in Node code. I can't tell whether you were just doing it to test the event loop or whether it is part of some real code. If you need more info, please expand on the functionality you are trying to achieve.
