Network adapter failing when using node.js

I'm having this super bizarre problem at work; I've made two small node.js programs that interact with each other via HTTP get and post requests. Program B monitors some text files and sends a post request to Program A anytime the files change (once a minute). After acknowledging that request, Program A then sends a post request to program B, which then writes another set of text files.
The computer that hosts Program B, at some (unpredictable) point in the day, just loses proper network connectivity: we can't use the browser, can't remote desktop, and the node program can't communicate either, logging ECONNRESET errors. It can, however, ping other machines on the network and on the internet. We thought there was a problem with the network card, so we actually swapped the machine that runs Program B, and we still have the same problem.
We've noticed this only happens when Node.js is running and actively sending HTTP requests to and from Program A. A restart is the only fix we've found that sets things back to normal.
Our next idea was to install Wireshark and see what's up. The first thing that we noticed is this on the 'info' field:
3000→53550 [RST, ACK] Seq=2 Ack=1 Win=0 Len=0
It happens once every minute, and seems to be from the machine hosting Program A. Is this normal behavior? (Wireshark highlights it in red) Network communication is still normal at this point.
Eventually, it zeroes out a TCP window and then things get worse:
We started to see duplicate acknowledgements and lots of retransmissions being sent:
and then finally the TCP window being full, and then reset:
Network communication fails at this point (except being able to ping).
I'm totally lost. I don't know whether this is a problem with my programming, or with node's implementation of network tools, or whether it's a driver / hardware issue, or a kernel issue.
We're running Windows 7 on both machines, with Node 4.4.1.
Any help or advice on what this could be would be really appreciated.
EDIT
Here are some code snippets covering the sending and receiving of information in Program B (I'm using Express.js).
Receiving:
app.post('/eton_call', function (req, res){
  logger.info("POST REQUEST: eton_call"); // using winston.js to log things
  reqCalls_writer.write_file(req.body);   // writes the text file in a certain format
  res.sendStatus(201);
})

app.post('/USERPRUN_call', function (req, res){
  logger.info("POST REQUEST: userprun_call");
  userPrun_writer.write_file(req.body);
  res.sendStatus(201);
})
Sending: (they all use this prototyped function. The argument, snapshot, is some JSON)
Eton_file_sync.prototype.send_snapshot = function(snapshot) {
  logger.info('sending snapshot');
  var post_options = {
    host: config.fex_server.host,
    port: config.fex_server.port,
    path: this.post_uri,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': snapshot.length
    }
  };
  var post_req = http.request(post_options, function(res) {
    logger.info("STATUS: " + res.statusCode);
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
      logger.info('Response: ' + chunk);
    });
  });
  post_req.on('error', function(e){
    logger.error('Snapshot not sent. Please ensure connectivity to fex-server.js. This snapshot will be lost. Error Message: ' + e.message);
  });
  post_req.write(snapshot);
  post_req.end();
}
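One detail worth double-checking here (an observation, not necessarily the cause of the resets): `'Content-Length': snapshot.length` counts string characters, not bytes. If a snapshot ever contains multi-byte UTF-8 characters, the header understates the body size, the server stops reading early, and the leftover bytes can corrupt the connection. A quick way to see the difference:

```javascript
// Illustration only: character count vs. byte count for a JSON payload.
// 'Müller' is an assumed example value, not data from the original program.
var snapshot = JSON.stringify({ name: 'Müller' });

var charCount = snapshot.length;                      // UTF-16 code units
var byteCount = Buffer.byteLength(snapshot, 'utf8');  // bytes actually sent on the wire

console.log(charCount, byteCount); // byteCount is larger: 'ü' is 2 bytes in UTF-8
```

Using `Buffer.byteLength(snapshot)` for the header (or omitting it and letting node use chunked encoding) avoids the mismatch.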
Here's the code for writing files
Eton_asc_writer.prototype.write_file = function (data_arr){
  logger.info("writing file");
  var buffer_data = '';
  data_arr.forEach(function (data_element, index, array){
    for (var prop in data_element){
      _.find(this, function(template_element){ // finds the template_element with the matching description
        return template_element.description == prop;
      }).datum = data_element[prop]; // copies the datum over to the template element
    }
    // finds the first, then second, then third... nth field
    for (var field_order = 1; field_order < this.length; field_order++){
      var current_field = _.find(this, function(template_element){
        return template_element.field_order == field_order;
      });
      if (!current_field.datum && (current_field.datum != 0)){
        current_field.datum = "";
      }
      var required_padding_amount = current_field.required_length - current_field.datum.toString().length;
      assert(required_padding_amount >= 0, "invalid length on " + current_field.description);
      //assert(current_field.type == typeof current_field.datum, "datum types don't match");
      for (var padding = ''; padding.length < required_padding_amount;)
        padding += ' ';
      if (current_field.type == "string"){
        current_field.padded_datum = current_field.datum + padding + ';';
      }
      else {
        current_field.padded_datum = padding + current_field.datum + ';';
      }
      buffer_data += current_field.padded_datum;
    }
    buffer_data += '\r\n'; // buffer newline
  }, this.template_in_use);
  //console.log(buffer_data);
  var write_buffer = new Buffer(buffer_data, 'utf8');
  var file = fs.createWriteStream(this.filename);
  file.write(write_buffer);
  file.end();
}
For Program A, the only post requests that would be happening at the time come from this: (req_calls is an array of objects that share the same key/value pairs)
function exportReqCalls(req_calls){
  if (req_calls.length == 0){
    return;
  }
  var exported_req_calls = JSON.stringify(req_calls);
  var post_options = {
    host: config.eton_agent.host,
    port: config.eton_agent.port,
    path: '/eton_call',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': exported_req_calls.length
    }
  };
  var post_req = http.request(post_options, function(res) {
    logger.info("STATUS: " + res.statusCode);
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
      //console.log('Response: ' + chunk);
    });
  });
  post_req.on('error', function(e){
    logger.error('ReqCall not sent. Please ensure connectivity to eton-agent.js. Error Message: ' + e.message);
  });
  post_req.write(exported_req_calls);
  post_req.end();
}

Related

How to start PostgreSQL message flow protocol using node.js net.Socket

I am able to send a Startup Message to the PostgreSQL server I have running and get a response from that server. I get ParameterStatus messages. The problem is I never get any type of Authentication message. My question is this: Why is it that the server never sends any type of Authentication message back to me?
Below I will show my code snippet (for understanding how the startup part of the protocol works), a couple of lines of its debugging output (so that hopefully you won't even have to read the code), what I think is the useful information from the PostgreSQL documentation for understanding my question, and another resource I have found useful for visualizing the protocol.
This is my code:
var net = require('net');
var BlueBird = require('bluebird');
var Buffer = require('buffer').Buffer;

var createStartupMessage = function(user_name, database_name){
  var buffer_size = 22 + user_name.length + 1 + database_name.length + 1 + 1;
  var StartUpMessage = new Buffer(buffer_size);
  var position_in_buffer = 0;
  StartUpMessage.writeUInt32BE(buffer_size, 0);
  position_in_buffer += 4;
  StartUpMessage.writeUInt32BE(196608, position_in_buffer); // protocol version 3.0
  position_in_buffer += 4;
  position_in_buffer = addMessageSegment(StartUpMessage, "user", position_in_buffer);
  position_in_buffer = addMessageSegment(StartUpMessage, user_name, position_in_buffer);
  position_in_buffer = addMessageSegment(StartUpMessage, "database", position_in_buffer);
  position_in_buffer = addMessageSegment(StartUpMessage, database_name, position_in_buffer);
  // Add the final null terminator to the buffer
  addNullTerminatorToMessageSegment(StartUpMessage, position_in_buffer);
  console.log("The StartUpMessage looks like this in Hexcode: " + StartUpMessage.toString('hex'));
  console.log("The length of the StartupMessage in Hexcode is: " + StartUpMessage.toString('hex').length);
  return StartUpMessage;
};

var addMessageSegment = function(StartUpMessage, message_segment, position_in_buffer){
  var bytes_in_message_segment = Buffer.byteLength(message_segment);
  StartUpMessage.write(message_segment, position_in_buffer, StartUpMessage.length - position_in_buffer, 'utf8');
  position_in_buffer = position_in_buffer + bytes_in_message_segment;
  position_in_buffer = addNullTerminatorToMessageSegment(StartUpMessage, position_in_buffer);
  return position_in_buffer;
};

var addNullTerminatorToMessageSegment = function(StartUpMessage, position_in_buffer){
  StartUpMessage.writeUInt8(0, position_in_buffer);
  position_in_buffer = position_in_buffer + 1;
  return position_in_buffer;
};
// Here is where everything starts. The functions above are called within this BlueBird Promise.
BlueBird.coroutine(function* () {
  var host = "127.0.0.1";
  var port = "5432";
  var idle_timeout = 10000;
  var MySocket = new net.Socket();
  MySocket.setTimeout(idle_timeout);
  var StartUpMessage = createStartupMessage("testusertwo", "testdatabasetwo");
  var data = yield new Promise(
    function resolver(resolve, reject) {
      var number_of_responses = 0;
      var number_of_responses_to_wait_for = 2;
      MySocket.on('connect', function () {
        var message = StartUpMessage.toString("utf8");
        var flushed = MySocket.write(message, "utf8");
        console.log("Message flushed to kernel: " + flushed);
      });
      MySocket.on('data', function (data) {
        console.log("The response from the server is: " + data.toString('utf8'));
        console.log("----This Line Divides the Response Below from the Response Above----");
        if (number_of_responses !== number_of_responses_to_wait_for){
          number_of_responses += 1;
        } else {
          resolve(data);
        }
      });
      MySocket.on('error', function (error) {
        reject(error);
      });
      MySocket.connect(port, host);
    }
  );
  return data;
})()
.then(function (data) {
  return data;
})
.catch(function (error) {
  console.error(error);
});
These are a couple of lines of what my code outputs for debugging purposes: the hexcode representation of the initial utf-8 encoded message I send to the server (the startup message format is shown on slide 9 via the link at the bottom), followed by the server's response.
After this, my program hangs, waiting to see an Authentication class of message. In the startup message I have bolded the first two 32-bit big-endian integers and all the null terminators for convenience. Also, the ? marks at the end (in ?M2\??ZI) are really those diamond question marks from utf-8, and this ending part changes on every run as well. I do not know why.
Some output from my code:
The StartUpMessage looks like this in Hexcode:
**0000003300030000**75736572**00**746573747573657274776f**00**6461746162617365**00**74657374646174616261736574776f**0000**
The response from the server is:
Sapplication_nameSclient_encodingUTF8SDateStyleISO, MDYSinteger_datetimesonSntervalStylepostgresSis_superuseroffSserver_encodingUTF8Sserver_version9.5.0S&session_authorizationtestusertwoS#standard_conforming_stringsonSTimeZoneUS/EasternK
?M2\??ZI
This is what I think is relevant Information from the Postgresql Documentation:
50.2 Message Flow, 50.2.1 Start-up:
To begin a session, a frontend opens a connection to the server and sends a startup message.
The authentication cycle ends with the server either rejecting the connection attempt (ErrorResponse), or sending AuthenticationOk.
This section says some other things as well that make it sound like I should either get one of the many Authentication messages listed (such as AuthenticationCleartextPassword message) or an AuthenticationOk if a password is not needed and everything happens without an error. If there is an error, then I should get an ErrorResponse message.
50.5 Message Formats:
In this section it is indicated that if the first Byte in the server response is ’S’, then the Message is classified as a ParameterStatus message.
In this section it also indicates that if the first Byte in the server response is ‘R’, then the Message is classified as an Authentication message.
The useful resource I found:
I think this is a very good resource for visualizing the message flow protocol. The author's name is Jan Urbański. On slide 9, the startup packet is shown. The only thing I've found (with node.js anyway) is that there needs to be another null terminator box before the "..." box.
https://www.pgcon.org/2014/schedule/attachments/330_postgres-for-the-wire.pdf
After looking in Wireshark, I realized that I was getting an Authentication message (an 'R' type message) after all. The problem was that I was parsing the data from the server incorrectly: I immediately converted it to a UTF-8 string. The data needs to be parsed according to the message formats before any of it is converted to UTF-8, because the formats are not just a bunch of chars strung together; they include 32-bit and 16-bit big-endian ints.
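The parsing described here can be sketched as follows. This is a simplified illustration of the regular-message framing (one type byte, then a 32-bit big-endian length that counts itself but not the type byte), not a complete protocol reader:

```javascript
// Sketch: split a server reply into (type, payload) messages instead of
// decoding the whole buffer as UTF-8. Per the docs, each regular message is
// a 1-byte type tag ('R', 'S', 'K', 'Z', ...) followed by a 32-bit
// big-endian length that includes itself but not the tag.
function splitMessages(buf) {
  var messages = [];
  var pos = 0;
  while (pos + 5 <= buf.length) {
    var type = String.fromCharCode(buf[pos]);
    var len = buf.readUInt32BE(pos + 1);             // includes these 4 length bytes
    var payload = buf.slice(pos + 5, pos + 1 + len); // but not the type byte
    messages.push({ type: type, payload: payload });
    pos += 1 + len;
  }
  return messages;
}
// An 'R' message whose payload is the 32-bit integer 0 is AuthenticationOk.
```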

Node.js network performance suffers drastically under load testing on AWS

I have the following Node.js code written as a very basic HTTP server. Its purpose is to ingest large numbers of requests containing base64 data and write that data to S3 as an image. The S3-writing aspect is working fantastically and has no issues. However, the initial request seems to take an abnormally long time under load.
server.js
http.createServer(function(req, res){
  if (url.parse(req.url).pathname == '/processimage' && req.method.toLowerCase() == 'post') {
    var startTime = new Date();
    var rawBody = '';
    req.on('data', function(chunk) {
      rawBody += chunk;
    });
    req.on('end', function() {
      console.log('REQUEST FINISHED: ' + (new Date() - startTime) + ' ms');
      // Process image, upload to S3
      res.writeHead(200);
      res.end('data');
    });
    return;
  } else {
    // Other requests
  }
}).listen(1347);
I am also timing the image processing section, but it is performing fine and not relevant to this question.
To test this, I have written a script that POSTs test data containing base64 of approximately 500k characters (2-3 MB original images). When testing this locally, everything works fine. My output is:
REQUEST FINISHED: 9ms
REQUEST FINISHED: 23ms
REQUEST FINISHED: 18ms
etc.
However, after deploying the code to AWS on an x-large instance, I see the following:
REQUEST FINISHED: 499ms
REQUEST FINISHED: 2493ms
REQUEST FINISHED: 1784ms
REQUEST FINISHED: 3440ms
REQUEST FINISHED: 994ms
REQUEST FINISHED: 36043ms
Essentially, when stress testing this, approximately 1 in every 30 requests seems to take 10+ seconds (even 30+ seconds in some cases) just to go through the request pipeline. As you can see in my code, there is zero processing being done on the data before the times are calculated, so this means that somewhere between "req.on('data')" and "req.on('end')", there is a massive delay.
My question is: is there some kind of processing happening between req.on('data') and req.on('end') that would cause this POST to take so long? Is it possible that the host machine is choking on these requests for some reason (Ubuntu 12.04, x-large instance, 14GB memory, 4x CPUs)?
This is likely going to take some extensive logging to see what is happening. You have multiple requests, each with large data which will be processed in multiple chunks. I'd say that the first thing to do is to log exactly when each chunk from each request is processed and then you can get an idea what order things are happening in and see where that leads. Here's a first pass idea at logging that:
// helper function
function logDelta(id, start, msg) {
  var delta = new Date() - start;
  console.log(id + ": (" + delta + ") - " + msg);
}

var reqCntr = 0;

http.createServer(function(req, res){
  if (url.parse(req.url).pathname == '/processimage' && req.method.toLowerCase() == 'post') {
    // place an id on the request for logging purposes
    req.trackerID = reqCntr++;
    console.log(req.trackerID + ": Begin Request");
    var startTime = new Date();
    var rawBody = '';
    var chunkCntr = 0;
    req.on('data', function(chunk) {
      logDelta(req.trackerID, startTime, "chunk(" + chunkCntr + "), length = " + chunk.length);
      ++chunkCntr;
      rawBody += chunk;
    });
    req.on('end', function() {
      logDelta(req.trackerID, startTime, "Request Finished");
      // Process image, upload to S3
      res.writeHead(200);
      res.end('data');
    });
    return;
  } else {
    // Other requests
  }
}).listen(1347);
Then, you may have to do some processing of the log data in order to be able to follow the timing of each event in each individual request, particularly the long running ones. This will likely then offer you a clue about where to look next.
FYI, there are a bunch of different places in the stack where you could be hitting a bottleneck. If, for example, you're firing data at the server faster than it can be processed (either at the OS level or at the node level), then the TCP buffers will fill up at some point and incoming packets will be dropped or the socket will be placed into some sort of flow control.
If you're running on a shared server, you may not have access to the whole TCP buffer either.
Here's a scheme that would collect all the history for each connection and then output it to the log at once (making dissection of a single connection easier, but obscuring the sequence of events among different connections).
var reqCntr = 0;

http.createServer(function(req, res){
  var log = [];
  var id;
  var startTime = new Date();

  // helper function
  function logDelta(msg) {
    var delta = new Date() - startTime;
    log.push(id + ": (" + delta + ") - " + msg);
  }

  if (url.parse(req.url).pathname == '/processimage' && req.method.toLowerCase() == 'post') {
    // place an id on the request for logging purposes
    id = reqCntr++;
    log.push(id + ": Begin Request");
    var rawBody = '';
    var chunkCntr = 0;
    req.on('data', function(chunk) {
      logDelta("chunk(" + chunkCntr + "), length = " + chunk.length);
      ++chunkCntr;
      rawBody += chunk;
    });
    req.on('end', function() {
      // dump connection history to console.log()
      logDelta("Request Finished");
      console.log(log.join("\n"));
      // Process image, upload to S3
      res.writeHead(200);
      res.end('data');
    });
    return;
  } else {
    // Other requests
  }
}).listen(1347);

Send out real time data to webclients error trapping

Trying to send data from a serial device to web clients. I am using a serial-to-network proxy, ser2net, to make the data available to a server that acts on the data and sends a manipulated version of it to web clients. The clients specify the location of the ser2net host and port. The core of this action is coded in node.js as shown here:
function getDataStream(socket, dataSourcePort, host) {
  var dataStream = net.createConnection(dataSourcePort, host),
      dataLine = "";
  dataStream.on('error', function(error){
    socket.emit('error', {message: "Source not found on host:" + host + " port:" + dataSourcePort});
    console.log(error);
  });
  dataStream.on('connect', function(){
    socket.emit('connected', {message: "Data Source Found"});
  });
  dataStream.on('close', function(){
    console.log("Close socket");
  });
  dataStream.on('end', function(){
    console.log('socket ended');
    dataConnection.emit('lost', {connectInfo: {host: host, port: dataSourcePort}});
  });
  dataStream.on('data', function(data) {
    // Collect a line from the host
    line += data.toString();
    // Split collected data by delimiter
    line.split(delimiter).forEach(function (part, i, array) {
      if (i !== array.length - 1) { // Fully delimited line.
        // push on to buffer and emit when bufferSendCommand is present
        dataLine = part.trim();
        buffer.push(part.trim());
        if (part.substring(0, bufferSendCommand.length) == bufferSendCommand){
          gotALine.emit('new', buffer);
          buffer = [];
        }
      }
      else {
        // Last split part might be partial. We can't announce it just yet.
        line = part;
      }
    });
  });
  return dataStream;
}
io.sockets.on('connection', function(socket){
  var stream = getDataStream(socket, dataSourcePort, host);
  // dispense incoming data from data server
  gotALine.on('new', function(buffer){
    socket.emit('feed', {feedLines: buffer});
  });
  dataConnection.on('lost', function(connectInfo){
    setTimeout(function(){
      console.log("Trying --- to reconnect ");
      stream = getDataStream(socket, connectInfo.port, connectInfo.host);
    }, 5000);
  });
  // Handle Client request to change stream
  socket.on('message', function(data) {
    var clientMessage = JSON.parse(data);
    if ('connectString' in clientMessage
        && clientMessage.connectString.dataHost !== ''
        && clientMessage.connectString.dataPort !== '') {
      stream.destroy();
      stream = getDataStream(socket,
                             clientMessage.connectString.dataPort,
                             clientMessage.connectString.dataHost);
    }
  });
});
This works well enough until the serial device drops off and ser2net stops sending data. My attempt to catch the end of the socket and reconnect is not working. The event gets emitted properly, but the setTimeout only runs once. I would like to find a way to keep trying to reconnect while sending the client a message about the retry attempts. I am a node.js newbie and this may not be the best way to do this. Any suggestions would be appreciated.
OK, I think I figured it out: in the dataStream.on('data', ...) handler I added a setTimeout:
clearTimeout(connectionMonitor);
connectionMonitor = setTimeout(function(){doReconnect(socket);}, someThresholdTime);
The timeout executes if data stops coming in, as it is repeatedly cleared each time data comes in. The doReconnect function keeps trying to connect and sends a message to the client saying something bad is going on.
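The watchdog idea can be factored into a small helper (a sketch; makeWatchdog is an illustrative name, not from the original code): every 'data' event resets a timer, and if the timer ever fires, the feed has stalled and doReconnect can take over.

```javascript
// Hypothetical sketch of the data watchdog described above. Each call to
// kick() pushes the stall deadline back; if no data arrives within
// thresholdMs, onStall fires once.
function makeWatchdog(onStall, thresholdMs) {
  var timer = null;
  return function kick() {
    if (timer) clearTimeout(timer);
    timer = setTimeout(onStall, thresholdMs);
  };
}

// usage (names assumed, matching the post's doReconnect/someThresholdTime):
// var kick = makeWatchdog(function () { doReconnect(socket); }, someThresholdTime);
// dataStream.on('data', function (data) { kick(); /* ...parse data... */ });
```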

node.js process out of memory in http.request loop

In my node.js server I can't figure out why it runs out of memory. My node.js server makes a remote http request for each http request it receives, so I've tried to replicate the problem with the sample script below, which also runs out of memory.
This only happens if the iterations in the for loop are very high.
From my point of view, the problem is related to the fact that node.js is queueing the remote http requests. How to avoid this?
This is the sample script:
(function() {
  var http, i, mypost, post_data;
  http = require('http');
  post_data = 'signature=XXX%7CPSFA%7Cxxxxx_value%7CMyclass%7CMysubclass%7CMxxxxx&schedule=schedule_name_6569&company=XXXX';
  mypost = function(post_data, cb) {
    var post_options, req;
    post_options = {
      host: 'myhost.com',
      port: 8000,
      path: '/set_xxxx',
      method: 'POST',
      headers: {
        'Content-Length': post_data.length
      }
    };
    req = http.request(post_options, function(res) {
      var res_data;
      res.setEncoding('utf-8');
      res_data = '';
      res.on('data', function(chunk) {
        return res_data += chunk;
      });
      return res.on('end', function() {
        return cb();
      });
    });
    req.on('error', function(e) {
      return console.debug('TM problem with request: ' + e.message);
    });
    req.write(post_data);
    return req.end();
  };
  for (i = 1; i <= 1000000; i++) {
    mypost(post_data, function() {});
  }
}).call(this);
$ node -v
v0.4.9
$ node sample.js
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Tks in advance
gulden PT
Constraining the flow of requests into the server
It's possible to prevent overload of the built-in Server and its HTTP/HTTPS variants by setting the maxConnections property on the instance. Setting this property will cause node to stop accept()ing connections and force the operating system to drop requests when the listen() backlog is full and the application is already handling maxConnections requests.
Throttling outgoing requests
Sometimes, it's necessary to throttle outgoing requests, as in the example script from the question.
Using node directly or using a generic pool
As the question demonstrates, unchecked use of the node network subsystem directly can result in out-of-memory errors. Something like node-pool makes active pool management attractive, but it doesn't solve the fundamental problem of unconstrained queuing, because node-pool doesn't provide any feedback about the state of the client pool.
UPDATE: As of v1.0.7 node-pool includes a patch inspired by this post to add a boolean return value to acquire(). The code in the following section is no longer necessary and the example with the streams pattern is working code with node-pool.
Cracking open the abstraction
As demonstrated by Andrey Sidorov, a solution can be reached by tracking the queue size explicitly and mingling the queuing code with the requesting code:
var useExplicitThrottling = function () {
  var active = 0
  var remaining = 10
  var queueRequests = function () {
    while (active < 2 && --remaining >= 0) {
      active++;
      pool.acquire(function (err, client) {
        if (err) {
          console.log("Error acquiring from pool")
          if (--active < 2) queueRequests()
          return
        }
        console.log("Handling request with client " + client)
        setTimeout(function () {
          pool.release(client)
          if (--active < 2) {
            queueRequests()
          }
        }, 1000)
      })
    }
  }
  queueRequests()
  console.log("Finished!")
}
Borrowing the streams pattern
The streams pattern is a solution which is idiomatic in node. Streams have a write operation which returns false when the stream cannot buffer more data. The same pattern can be applied to a pool object with acquire() returning false when the maximum number of clients have been acquired. A drain event is emitted when the number of active clients drops below the maximum. The pool abstraction is closed again and it's possible to omit explicit references to the pool size.
var useStreams = function () {
  var queueRequests = function (remaining) {
    var full = false
    pool.once('drain', function () {
      if (remaining) queueRequests(remaining)
    })
    while (!full && --remaining >= 0) {
      console.log("Sending request...")
      full = !pool.acquire(function (err, client) {
        if (err) {
          console.log("Error acquiring from pool")
          return
        }
        console.log("Handling request with client " + client)
        setTimeout(pool.release, 1000, client)
      })
    }
  }
  queueRequests(10)
  console.log("Finished!")
}
Fibers
An alternative solution can be obtained by providing a blocking abstraction on top of the queue. The fibers module exposes coroutines that are implemented in C++. By using fibers, it's possible to block an execution context without blocking the node event loop. While I find this approach to be quite elegant, it is often overlooked in the node community because of a curious aversion to all things synchronous-looking. Notice that, excluding the callcc utility, the actual loop logic is wonderfully concise.
/* This is the call-with-current-continuation found in Scheme and other
 * Lisps. It captures the current call context and passes a callback to
 * resume it as an argument to the function. Here, I've modified it to fit
 * JavaScript and node.js paradigms by making it a method on Function
 * objects and using function (err, result) style callbacks.
 */
Function.prototype.callcc = function(context /* args... */) {
  var that = this,
      caller = Fiber.current,
      fiber = Fiber(function () {
        that.apply(context, Array.prototype.slice.call(arguments, 1).concat(
          function (err, result) {
            if (err)
              caller.throwInto(err)
            else
              caller.run(result)
          }
        ))
      })
  process.nextTick(fiber.run.bind(fiber))
  return Fiber.yield()
}
var useFibers = function () {
  var remaining = 10
  while (--remaining >= 0) {
    console.log("Sending request...")
    try {
      var client = pool.acquire.callcc(this)
      console.log("Handling request with client " + client)
      setTimeout(pool.release, 1000, client)
    } catch (x) {
      console.log("Error acquiring from pool")
    }
  }
  console.log("Finished!")
}
Conclusion
There are a number of correct ways to approach the problem. However, for library authors or applications that require a single pool to be shared in many contexts it is best to properly encapsulate the pool. Doing so helps prevent errors and produces cleaner, more modular code. Preventing unconstrained queuing then becomes an evented dance or a coroutine pattern. I hope this answer dispels a lot of FUD and confusion around blocking-style code and asynchronous behavior and encourages you to write code which makes you happy.
Yes: you're trying to queue 1,000,000 requests before even starting any of them. This version keeps a limited number of requests (100) in flight at a time:
function do_1000000_req( cb )
{
  var num_active = 0;
  var num_finished = 0;
  var num_sheduled = 0;
  function shedule()
  {
    while (num_active < 100 && num_sheduled < 1000000) {
      num_active++;
      num_sheduled++;
      mypost(post_data, function() {
        num_active--;
        num_finished++;
        if (num_finished == 1000000)
        {
          cb();
          return;
        } else if (num_sheduled < 1000000)
          shedule();
      });
    }
  }
  shedule(); // kick off the first batch
}

do_1000000_req( function() {
  console.log('done!');
});
The node-pool module can help you. For more details, see this post (in French): http://blog.touv.fr/2011/08/http-request-loop-in-nodejs.html

Having trouble understanding node.js listeners

I'm working on two node.js tutorials at the moment and while I understand what is going on within each tutorial, I clearly don't understand what's going on that well.
The following code listens for "data" events and then adds new chunks of data to a variable named postData. Another listener sends this data along with other stuff to my route.js file.
request.addListener("data", function (postDataChunk) {
  postData += postDataChunk;
  console.log("Received POST data chunk '" + postDataChunk + "'.");
});

request.addListener("end", function () {
  route(handle, pathname, response, postData);
});
The following code creates a variable, tail_child, that spawns the shell command 'tail' on my system log and then attempts to add this data to my postData variable:
var spawn = require('child_process').spawn;
var tail_child = spawn('tail', ['-f', '/var/log/system.log']);

tail_child.stdout.on('data', function (data) {
  postData += data;
  console.log("TAIL READING: " + data);
});

tail_child.stdout.on('end', function () {
  route(handle, pathname, response, postData);
});
Now my console is updated in realtime with system.log data but my browser times out with a "No data received error."
I've tried tweaking the code above to figure out what is going wrong, and as near as I can tell node is telling me that data is null, so it is adding nothing to postData. This doesn't make sense to me, since console.log("TAIL READING: " + data) gives me the results of spawn('tail', ['-f', '/var/log/system.log']) in my terminal window. Clearly data is not null.
Edit:
Here's a pastebin link to my server.js code
tail -f never emits the end event, so the route callback never runs and you never respond to the user.
