Node.js TCP socket waits seconds before polling? (net.createServer)

I have a very simple TCP socket in Node.js. It connects to a device that sends data back in XML format. There is a C# program that does the same thing, but I had to build it in Node.js.
So, when the device sends a message, I'm getting the response about 5 seconds later, whereas the C# program gets it 1 or 2 seconds later.
It looks like the TCP socket has a specific polling frequency or some kind of wait function. Is that even possible? Every time an incoming message is displayed, it also displays the exit message from sock.on('close').
It seems that after 5 seconds the server automatically closes; see the bottom line console.log('[LISTENER] Connection paused.'). After that, the incoming message is displayed correctly.
What is wrong with my code?
// Set Node.js dependencies
var fs = require('fs');
var net = require('net');

// Handle uncaught exceptions
process.on('uncaughtException', function (err) {
    console.log('Error: ', err);
    // Write to logfile
    var log = fs.createWriteStream('error.log', {'flags': 'a'});
    log.write(err + '\n\r');
});

/*
-------------------------------------------------
Socket TCP : TELLER
-------------------------------------------------
*/
var oHOST = '10.180.2.210';
var oPORT = 4100;

var client = new net.Socket();
client.connect(oPORT, oHOST, function() {
    console.log('TCP TELLER tells to: ' + oHOST + ':' + oPORT);
    // Send the XML message here (oMessage is defined elsewhere); this part works correctly
    client.write(oMessage);
});

// Event handler: incoming data
client.on('data', function(data) {
    // Close the client socket completely
    client.destroy();
});

// Event handler: close connection
client.on('close', function() {
    console.log('[TELLER] Connection paused.');
});

/*
-------------------------------------------------
Socket TCP : LISTENER
-------------------------------------------------
*/
var iHOST = '0.0.0.0';
var iPORT = 4102;

// Create a server instance, and chain the listen function to it
var server = net.createServer(function(sock) {
    // We have a connection - a socket object is assigned to it automatically
    console.log('TCP LISTENER hearing on: ' + sock.remoteAddress + ':' + sock.remotePort);

    // Event handler: incoming data
    sock.on('data', function(data) {
        console.log('Message: ', ' ' + data);
    });

    // Event handler: close connection
    sock.on('close', function(data) {
        console.log('[LISTENER] Connection paused.');
    });
}).listen(iPORT, iHOST);

client.write() does not always transmit data immediately; it may buffer data before sending the packet. client.end() will close the socket and flush the buffers.

You could try socket.setNoDelay(): http://nodejs.org/api/net.html#net_socket_setnodelay_nodelay
Your 5-second delay does seem a bit weird, though.
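For reference, a minimal sketch of using setNoDelay on the teller socket to disable Nagle's algorithm (the host, port, and XML payload are placeholders taken from the question's setup):

var net = require('net');

var client = new net.Socket();
client.connect(4100, '10.180.2.210', function () {
    client.setNoDelay(true); // disable Nagle's algorithm so small writes are not coalesced
    client.write('<message>hello</message>'); // goes out right away instead of sitting in a buffer
    client.end(); // half-closes the socket and flushes any remaining buffered data
});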

So within the "TELLER" I had to call sock.write(data); inside the sock.on('data', function(data)) event handler. It works now. Thanks Jeremy and rdrey for helping me in the right direction.
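A sketch of what that fix looks like inside the connection handler (assuming, as the resolution above suggests, that the device expects its data acknowledged on the same connection before it continues):

sock.on('data', function (data) {
    console.log('Message: ', ' ' + data);
    sock.write(data); // reply on the same socket so the device gets its acknowledgement immediately
});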

Related

WebSocket stops receiving data after 15 - 20 minutes of data stream - NodeJS

Code
var websock = net.createServer(function(sock) {
    sock.pipe(sock);
    sock.setEncoding('utf8');
    sock.setKeepAlive(true);

    sock.on("data", function(d) {
        console.log("websock", d);
    });

    sock.on('end', function() {
        console.log('websock disconnected');
    });
});
websock.listen(777, '127.0.0.1');
After a few minutes (~15 minutes) the callback code in sock.on("data", function() {}) seems to stop working. Why is that? I checked the console log; there is no entry with the string "websock disconnected".
If the socket is not disconnected and there is no error, what has happened to the socket connection or the data stream?
The other end (the server side, the data sender) seems to be streaming data continuously, while the client side (the Node.js app) has stopped receiving data.
The issue arises from your use of the pipe mechanism to echo back data which is never consumed on the original side (communication is unidirectional):
sock.pipe(sock);
This makes your code work as an echo server. Your socket "sock" is a duplex stream (i.e. both readable - for the incoming data you receive, and writable - for outgoing data you send back).
A quick fix, if you don't need to respond and only need to receive data, is to simply delete the "sock.pipe(sock);" line. Read on for the explanation.
Most probably your data source (the MT5 application you mentioned) sends data continuously and never reads what you send back. So your code keeps echoing back the received data using sock.pipe(sock), filling the outgoing buffer, which is never consumed. However, the pipe mechanism of Node.js streams handles backpressure, which means that when two streams (a readable and a writable one) are connected by a pipe, and the outgoing buffer is filling up (reaching the high watermark), the readable stream is paused to prevent the "overflow" of the writable stream.
You can read more about backpressure in the Node.js docs. This fragment in particular describes how streams handle backpressure:
In Node.js the source is a Readable stream and the consumer is the Writable stream [...]
The moment that backpressure is triggered can be narrowed exactly to the return value of a Writable's .write() function. [...]
In any scenario where the data buffer has exceeded the highWaterMark or the write queue is currently busy, .write() will return false.
When a false value is returned, the backpressure system kicks in. It will pause the incoming Readable stream from sending any data and wait until the consumer is ready again.
Below you can find my setup to show where backpressure kicks in; there are two files, server.js and client.js. If you run them both, the server will soon write "BACKPRESSURE" to the console. As the server is not handling backpressure (it ignores that sock.write starts returning false at some point), the outgoing buffer keeps filling and consuming more memory, whereas in your scenario socket.pipe was handling backpressure and thus paused the flow of incoming messages.
The server:
// ----------------------------------------
// server.js
var net = require('net');

var server = net.createServer(function (socket) {
    console.log('new connection');
    // socket.pipe(socket); // replaced with socket.write on each 'data' event
    socket.setEncoding('utf8');
    socket.setKeepAlive(true);

    socket.on("data", function (d) {
        console.log("received: ", d);
        var result = socket.write(d);
        console.log(result ? 'write ok' : 'BACKPRESSURE');
    });

    socket.on('error', function (err) {
        console.log('client error:', err);
    });

    socket.on('end', function () {
        console.log('client disconnected');
    });
});

server.listen(10777, '127.0.0.1', () => {
    console.log('server listening...');
});
The client:
// ----------------------------------------
// client.js
var net = require('net');

var client = net.createConnection(10777, () => {
    console.log('connected to server! ' + new Date().toISOString());
    var count = 1;
    var date;
    while (count < 35000) {
        count++;
        date = new Date().toISOString() + '_' + count;
        console.log('sending: ', date);
        client.write(date + '\n');
    }
});

client.on('data', (data) => {
    console.log('received:', data.toString());
});

client.on('end', () => {
    console.log('disconnected from server');
});

use node.js cluster with socket.io chat application

I'm trying to learn Node.js cluster with Socket.IO to create a chat application... the problem is that I can't seem to get things working.
I've been trying to go through all the tutorials, including the one from http://stackoverflow.com/questions/18310635/scaling-socket-io-to-multiple-node-js-processes-using-cluster/18650183#18650183
When I open two browsers, the messages do not go to the other browser.
Here's the code that I have:
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    socketio = require('socket.io'),
    socket_redis = require('socket.io-redis');

var port = 3000,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
    // This stores our workers. We need to keep them to be able to reference
    // them based on source IP address. It's also useful for auto-restart,
    // for example.
    var workers = [];

    // Helper function for spawning worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();

        // Optional: Restart worker on exit
        workers[i].on('exit', function(code, signal) {
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    // Helper function for getting a worker index based on IP address.
    // This is a hot path so it should be really fast. The way it works
    // is by converting the IP address to a number by removing non-numeric
    // characters, then compressing it to the number of slots we have.
    //
    // Compared against "real" hashing (from the sticky-session code) and
    // "real" IP number conversion, this function is on par in terms of
    // worker index distribution, only much faster.
    var worker_index = function(ip, len) {
        var s = '';
        for (var i = 0, _len = ip.length; i < _len; i++) {
            if (!isNaN(ip[i])) {
                s += ip[i];
            }
        }
        return Number(s) % len;
    };

    // Create the outside-facing server listening on our port.
    var server = net.createServer({ pauseOnConnect: true }, function(connection) {
        // We received a connection and need to pass it to the appropriate
        // worker. Get the worker for this connection's source IP and pass
        // it the connection.
        var worker = workers[worker_index(connection.remoteAddress, num_processes)];
        worker.send('sticky-session:connection', connection);
    }).listen(port);
} else {
    // Note we don't use a port here because the master listens on it for us.
    var app = new express();

    // Here you might use middleware, attach routes, etc.
    app.use('/assets', express.static(__dirname + '/public'));
    app.get('/', function(req, res) {
        res.sendFile(__dirname + '/index.html');
    });

    // Don't expose our internal server to the outside.
    var server = app.listen(),
        io = socketio(server);

    // Tell Socket.IO to use the redis adapter. By default, the redis
    // server is assumed to be on localhost:6379. You don't have to
    // specify them explicitly unless you want to change them.
    io.adapter(socket_redis({ host: 'localhost', port: 6379 }));

    // Here you might use Socket.IO middleware for authorization etc.
    io.on('connection', function(socket) {
        console.log('New client connection detected on process ' + process.pid);
        socket.emit('welcome', {message: 'Welcome to BlueFrog Chat Room'});
        socket.on('new.message', function(message) {
            socket.emit('new.message', message);
        });
    });

    // Listen to messages sent from the master. Ignore everything else.
    process.on('message', function(message, connection) {
        if (message !== 'sticky-session:connection') {
            return;
        }

        // Emulate a connection event on the server by emitting the
        // event with the connection the master sent us.
        server.emit('connection', connection);
        connection.resume();
    });
}
If I understand correctly, your problem is that messages from one client are not broadcast to the other clients. You can solve this easily:
io.on('connection', function(socket) {
    console.log('New client connection detected on process ' + process.pid);
    socket.emit('welcome', {message: 'Welcome to BlueFrog Chat Room'});
    socket.on('new.message', function(message) {
        socket.emit('new.message', message);            // this line sends the message back to the emitter
        socket.broadcast.emit('new.message', message);  // this broadcasts the message to all the other clients
    });
});
There are different ways to emit a message. The one you are using emits the message only to the socket that sent the 'new.message' event to the server, which means a socket receives the message back only if it sent it in the first place. That's why, in your browser, the client originating the message is the only one receiving it back.
Change it to:
socket.on('new.message', function(message) {
    io.sockets.emit('new.message', message);       // use this if even the browser originating the message should be updated
    socket.broadcast.emit('new.message', message); // use this if everyone should be updated except the browser that sent the message
});
Here are the different ways you can emit:
io.sockets.on('connection', function(socket) {
    // This message is only sent to the client corresponding to this socket.
    socket.emit('private message', 'only you can see this');

    // This message is sent to every single socket connected in this
    // session, including this very socket.
    io.sockets.emit('public message', 'everyone sees this');

    // This message is sent to every single connected socket, except
    // this very one (the one requesting the message to be broadcast).
    socket.broadcast.emit('exclude sender', 'one client wanted all of you to see this');
});
You can also add sockets to different rooms when they connect so that you only communicate messages with sockets from a given room:
io.sockets.on('connection', function(socket) {
    // Add this socket to a room called 'room 1'.
    socket.join('room 1');

    // This message is received by every socket that has joined
    // 'room 1', including this one. (Note that a socket doesn't
    // necessarily need to belong to a certain room to be able to
    // request messages to be sent to that room.)
    io.to('room 1').emit('room message', 'everyone in room 1 sees this');

    // This message is received by every socket that has joined
    // 'room 1', except this one.
    socket.broadcast.to('room 1').emit('room message', 'everyone in room 1 sees this');
});

Create SocketTimeoutException with Node.js server

So I modified the accepted answer in the thread How do I shut down a Node.js http(s) server immediately? and was able to close down my Node.js server.
// Create a new server on port 4000
var http = require('http');
var server = http.createServer(function (req, res) {
    res.end('Hello world!');
}).listen(4000);

// Maintain a hash of all connected sockets
var sockets = {}, nextSocketId = 0;
server.on('connection', function (socket) {
    // Add a newly connected socket
    var socketId = nextSocketId++;
    sockets[socketId] = socket;
    console.log('socket', socketId, 'opened');

    // Remove the socket when it closes
    socket.on('close', function () {
        console.log('socket', socketId, 'closed');
        delete sockets[socketId];
    });
});

...

if (bw == 0) { // condition to trigger closing the server
    // Close the server
    server.close(function () { console.log('Server closed!'); });

    // Destroy all open sockets
    for (var socketId in sockets) {
        console.log('socket', socketId, 'destroyed');
        sockets[socketId].destroy();
    }
}
However, on the client side, the client throws a ConnectionException because of the server.close() statement. I want the client to throw a SocketTimeoutException instead, meaning the server is active but the socket just hangs. Someone else was able to do this with a Jetty server in Java:
Server jettyServer = new Server(4000);
...
if (bw == 0) {
    server.getConnectors()[0].stop();
}
Is it possible to achieve something like that? I've been stuck on this for a while; any help is extremely appreciated. Thanks.
What you ask is impossible. A SocketTimeoutException is thrown when reading from a connection that hasn't terminated but hasn't delivered any data during the timeout period.
A connection closure does not cause it. It doesn't cause a ConnectionException either, as there is no such thing. It causes either an EOFException, a null return from readLine(), a -1 return from read(), or an IOException: connection reset if the close was abortive.
Your question doesn't make sense.
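That said, given the definition above, the closest a Node.js server can get to provoking a client-side read timeout is to keep the connection open and simply never respond. A minimal sketch of that idea (the timeout itself must be configured on the client, e.g. with Java's Socket#setSoTimeout):

var net = require('net');

// Accept connections and keep them open without ever writing a byte back.
// A client that reads with a timeout will then hit its read timeout
// instead of observing a connection closure.
var server = net.createServer(function (socket) {
    socket.on('data', function () {
        // Intentionally ignore the request and send nothing.
    });
}).listen(4000);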

Windows 2012 server not executing socket.on

I am trying to learn to use Node.js and JavaScript to replace at least some of my Perl code.
I need to create a socket and have a server/listener accept data sent from a client.
The problem I am having is that under Windows 2012 Server, the listener code below completely ignores the socket.on handler, but it works fine under CentOS.
Does anyone have any suggestions as to what I am missing?
var net = require('net');
var fs = require('fs');

var server = net.createServer(function(socket) {
    console.log('At : ' + (new Date()) + '\nA client connected to server...');
    console.log('IP addr : ' + socket.remoteAddress);

    // Process data sent from client
    socket.on('data', function(data) {
        console.log('Reached socket.on function.\n');

        // Following command reads the data stream from client
        var string = ('IP addr : ' + socket.remoteAddress + ' sent on ' + (new Date()) + ' : ' + data.toString() + '\n');
        console.log(string);

        // Following command writes the data stream to a file
        fs.appendFile('client-data', string, function(err) {
            if (err) {
                return console.log(err);
            }
            console.log('Data saved from client...');
        });
    });

    // This section sends data back to the client, closes the socket, and
    // resets to accept a new socket connection
    socket.write('\n<svrdata>\nData received by : HS-AD02 \r\n</svrdata>');
    socket.pipe(socket);
    socket.end();
    console.log('The client has disconnected...\n');
}).listen(10337, 'hs-ad02');
The problem in the code above has a very simple solution.
Under Windows, the two lines
socket.pipe(socket);
socket.end();
cause the socket to close before the client data is received.
Remove those two lines and it works just fine.
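A sketch of where the reply would go after that fix, moving the write and end into the 'data' handler so the socket stays open until the client data has actually arrived (replying and closing after a single message is an assumption here):

socket.on('data', function(data) {
    // ... log the data and append it to the file as above ...

    // Reply only once the client data has arrived, then close this
    // connection so the server can accept the next one.
    socket.write('\n<svrdata>\nData received by : HS-AD02 \r\n</svrdata>');
    socket.end();
});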

Creating a server listening on a `net` stream and reply using Node.js

Can someone give me a working example of how to create a server that listens on a net stream and replies when a request comes through?
Here's what I have so far:
var port = 4567,
    net = require('net');

var sockets = [];

function receiveData(socket, data) {
    console.log(data);
    var dataToReplyWith = ' ... ';
    // ... HERE I need to reply somehow with something to the client that sent the initial data
}

function closeSocket(socket) {
    var i = sockets.indexOf(socket);
    if (i != -1) {
        sockets.splice(i, 1);
    }
}

var server = net.createServer(function (socket) {
    console.log('Connection ... ');
    sockets.push(socket);
    socket.setEncoding('utf8');

    socket.on('data', function(data) {
        receiveData(socket, data);
    });

    socket.on('end', function() {
        closeSocket(socket);
    });
}).listen(port);
Will socket.write(dataToReplyWith); do it?
Yes, you can just write to the socket whenever (as long as the socket is still writable, of course). However, the problem you may run into is that one data event may not imply a complete "request." Since TCP is just a stream, you can get any amount of data, which may not align with your protocol's message boundaries. So you could get half a "request" in one data event and the other half in another, or the opposite: multiple "requests" in a single data event.
If this is your own custom client/server design and you do not already have some sort of protocol in place to determine "request"/"response" boundaries, you should incorporate that now.
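For instance, a minimal sketch of newline-delimited framing (the '\n' delimiter and the reply format are assumptions; a length prefix would work just as well): buffer incoming chunks, split out each complete message, and reply per request:

var net = require('net');

var server = net.createServer(function (socket) {
    var buffered = '';
    socket.setEncoding('utf8');

    socket.on('data', function (chunk) {
        buffered += chunk;

        // Extract every complete, newline-terminated request from the buffer.
        var boundary = buffered.indexOf('\n');
        while (boundary !== -1) {
            var request = buffered.slice(0, boundary);
            buffered = buffered.slice(boundary + 1);

            // Reply to the client that sent this request.
            socket.write('reply: ' + request + '\n');
            boundary = buffered.indexOf('\n');
        }
    });
}).listen(4567);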
