I have a Node.js sample where a client socket makes two writes to a server. I'm trying to make sure the server receives the writes one by one, using socket.write with a callback:
var net = require('net');

const HOST = '127.0.0.1';
const PORT = 7000;

var server = new net.Server(socket => {
  socket.on('data', data => {
    console.log("Server received: " + data);
  });
});
server.listen(PORT, HOST);

var client = new net.Socket();
client.connect(PORT, HOST);
client.write("call 1", "utf8", () => {
  client.write("call 2");
});
When I run it I get output:
Server received: call 1call 2
According to the docs here https://nodejs.org/api/net.html#net_socket_write_data_encoding_callback:
... The optional callback parameter will be executed when the data is finally written out...
What does "the data is finally written out" mean? How can I make the server produce:
Server received: call 1
Server received: call 2
Thanks,
Dinko
You are dealing with a stream. It does not know anything about the beginning and end of your messages.
You need to add a delimiter (e.g. \n: client.write("call 2\n");).
You need to split the data by that delimiter on the receiver (e.g. with the node split package), as sketched below.
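A minimal sketch of that approach, splitting on \n manually on the server side (the delimiter choice and the manual splitting instead of the split package are just an illustration):

var net = require('net');

const HOST = '127.0.0.1';
const PORT = 7000;

var server = new net.Server(socket => {
  let buffered = '';
  socket.on('data', data => {
    buffered += data; // data is a Buffer; += converts it to a utf8 string
    const messages = buffered.split('\n');
    buffered = messages.pop(); // keep the incomplete tail, if any
    messages.forEach(msg => console.log("Server received: " + msg));
  });
});
server.listen(PORT, HOST);

var client = new net.Socket();
client.connect(PORT, HOST);
client.write("call 1\n", "utf8", () => {
  client.write("call 2\n");
});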
Alternatively, you can delay the second write with a timeout:
client.write("call 1", "utf8")
setTimeout(() => {
client.write("call 2");
}, 100);
I'm trying to write a very simple node TCP server which reads in the full input stream and writes out some function of the input. The output cannot be generated without the full input, so the writes cannot be streamed as the input comes in. For simplicity's sake in this post, I have omitted the actual transformation of the input and am just writing back the input.
My initial attempt was to write within the end event handler:
const net = require('net');

const server = net.createServer(async client => {
  let data = '';
  client.on('end', () => {
    client.write(data);
  });
  client.on('data', part => {
    data += part.toString();
  });
  client.pipe(client);
});
server.listen(8124);
But this results in a Socket.writeAfterFIN error ("This socket has been ended by the other party"), which led me to enable allowHalfOpen, because the docs seem to indicate that it separates the incoming and outgoing FIN packets.
const net = require('net');

const drain = client =>
  new Promise(resolve => {
    let data = '';
    client.on('end', () => {
      console.log('end');
      resolve(data);
    });
    client.on('data', part => {
      console.log('data');
      data += part.toString();
    });
  });

const server = net.createServer({ allowHalfOpen: true }, async client => {
  const req = await drain(client);
  client.end(req);
});
server.listen(8124);
This works when I use e.g. echo 'abc' | nc localhost 8124, but I'm not sure whether allowHalfOpen should be necessary here. Is there another way to write shortly after the end of the input stream?
Using netcat instead of curl resolves the issue, e.g. echo 'abc' | nc localhost 8124. This is also more in line with what I need to do anyway, since I don't need HTTP for this server.
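If half-open connections are acceptable here, the same drain logic can also be expressed with async iteration instead of manual 'data'/'end' listeners. This is just a sketch, assuming a Node.js version where sockets are async iterable; allowHalfOpen is still needed so the server can reply after the client has finished sending:

const net = require('net');

const server = net.createServer({ allowHalfOpen: true }, async client => {
  let data = '';
  for await (const chunk of client) { // iteration ends when the client sends its FIN
    data += chunk;
  }
  client.end(data); // writable side is still open thanks to allowHalfOpen
});
server.listen(8124);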
Code
var websock = net.createServer(function(sock) {
  sock.pipe(sock);
  sock.setEncoding('utf8');
  sock.setKeepAlive(true);

  sock.on("data", function(d) {
    console.log("websock", d);
  });

  sock.on('end', function() {
    console.log('websock disconnected');
  });
});
websock.listen(777, '127.0.0.1');
After a few minutes (~15 min) the callback in sock.on("data", function() {}) seems to stop being called. Why is that? I checked the console log; there is no line containing "websock disconnected".
If the socket is not disconnected and there is no error, what has happened to the socket connection or the data stream?
On the other end, the server side (the data sender) seems to be streaming data continuously, while the client side (the Node.js app) has stopped receiving data.
The issue arises from your use of the pipe mechanism to echo back data that is never consumed on the other side (the communication is unidirectional):
sock.pipe(sock);
This makes your code work as an echo server. Your socket "sock" is a duplex stream (i.e. both readable, for the incoming data you receive, and writable, for the outgoing data you send back).
A quick fix, if you don't need to respond and only need to receive data, is to simply delete the "sock.pipe(sock);" line. For the explanation, read on.
Most probably your data source (the MT5 application you mentioned) sends data continuously and doesn't read what you send back at all. So your code keeps echoing the received data back using sock.pipe(sock), filling the outgoing buffer, which is never consumed. However, the pipe mechanism of Node.js streams handles backpressure, which means that when two streams (a readable and a writable one) are connected by a pipe and the outgoing buffer fills up (reaching its highWaterMark), the readable stream is paused to prevent the "overflow" of the writable stream.
You can read more about backpressure in the Node.js docs. This fragment in particular describes how streams handle backpressure:
In Node.js the source is a Readable stream and the consumer is the Writable stream [...]
The moment that backpressure is triggered can be narrowed exactly to the return value of a Writable's .write() function. [...]
In any scenario where the data buffer has exceeded the highWaterMark or the write queue is currently busy, .write() will return false.
When a false value is returned, the backpressure system kicks in. It will pause the incoming Readable stream from sending any data and wait until the consumer is ready again.
Below you can find my setup showing where backpressure kicks in; there are two files, server.js and client.js. If you run them both, the server will soon start logging "BACKPRESSURE". Since this server does not handle backpressure (it ignores that sock.write starts returning false at some point), the outgoing buffer keeps filling, consuming more and more memory, whereas in your scenario socket.pipe was handling backpressure and therefore paused the flow of the incoming messages.
The server:
// ----------------------------------------
// server.js
var net = require('net');

var server = net.createServer(function (socket) {
  console.log('new connection');

  // socket.pipe(socket); // replaced with socket.write on each 'data' event
  socket.setEncoding('utf8');
  socket.setKeepAlive(true);

  socket.on("data", function (d) {
    console.log("received: ", d);
    var result = socket.write(d);
    console.log(result ? 'write ok' : 'BACKPRESSURE');
  });

  socket.on('error', function (err) {
    console.log('client error:', err);
  });

  socket.on('end', function () {
    console.log('client disconnected');
  });
});

server.listen(10777, '127.0.0.1', () => {
  console.log('server listening...');
});
The client:
// ----------------------------------------
// client.js
var net = require('net');

var client = net.createConnection(10777, () => {
  console.log('connected to server!' + new Date().toISOString());
  var count = 1;
  var date;
  while (count < 35000) {
    count++;
    date = new Date().toISOString() + '_' + count;
    console.log('sending: ', date);
    client.write(date + '\n');
  }
});

client.on('data', (data) => {
  console.log('received:', data.toString());
});

client.on('end', () => {
  console.log('disconnected from server');
});
What I tried to achieve with node.js/io.js is to send a file from one server to another via a proxy. To avoid buffering in memory, I want to use streams.
The proxy should be able to connect to multiple targets dynamically. The target connection information should be sent to the proxy before the file data.
With normal socket communication and buffering this is not a problem, but how can this be done with streams?
var net = require('net');
var fs = require('fs');

// create read stream from file
var myFile = fs.createReadStream('E:/sample.tar.gz');

// Proxy server
//####################################################################################################
var proxy = net.createServer(function (socket) {
  // Create a new connection to the TCP server
  var client = net.connect('9010');
  // 2-way pipe between client and TCP server
  socket.pipe(client).pipe(socket);
}).listen(9000);

// Target server
//####################################################################################################
var server = net.createServer(function (socket) {
  // create write stream to write data into file
  var destfile = fs.createWriteStream('E:/sample_copy.tar.gz');

  socket.on('data', function (buffer) {
    console.log('Get data on targetserver...');
    // write buffer to file
    destfile.write(buffer);
  });

  socket.on('end', function () {
    // release file from write stream
    destfile.end();
  });
}).listen(9010);

// Client
//####################################################################################################
// Send file to proxy
var client = new net.Socket();

// connect to proxy
client.connect('9000', '127.0.0.1', function () {
  console.log('Connection to proxy opened');
});

// send data to proxy
myFile.pipe(client);

// read response from target
client.on('data', function (data) {
  console.log('Response: ' + data);
  // close the client socket completely
  client.destroy();
});

// Add a 'close' event handler for the client socket
client.on('close', function () {
  console.log('Connection to proxy closed');
});
Any hint to a good tutorial is also welcome.
TMOE
socket.write() already uses streams under the hood so you don't need to do anything special. Just send it the usual Buffer object or string and it will use a stream.
From the current source code of io.js, here's what happens when you use socket.write():
Socket.prototype.write = function(chunk, encoding, cb) {
  if (typeof chunk !== 'string' && !(chunk instanceof Buffer))
    throw new TypeError('invalid data');
  return stream.Duplex.prototype.write.apply(this, arguments);
};
And stream is declared like this:
const stream = require('stream');
Apologies if I've misunderstood your question/requirements! By all means, clarify if I have misunderstood you and I'll try again (or delete this answer so it's not a distraction).
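As for sending the target connection information ahead of the file data: one possible approach (just a sketch; the JSON header format and the target port are illustrative) is to have the client write a single newline-terminated header line first, and have the proxy parse that line, connect to the requested target, and pipe everything else through. For simplicity, this assumes the whole header line arrives in the first 'data' chunk; a robust version would buffer until the first \n is seen.

var net = require('net');

// Proxy: read a JSON header line, connect to the requested target, then pipe.
var proxy = net.createServer(function (socket) {
  socket.once('data', function (first) {
    socket.pause(); // don't lose chunks while connecting; pipe() resumes the socket later
    var nl = first.indexOf('\n');
    var header = JSON.parse(first.slice(0, nl).toString('utf8')); // e.g. { "host": "127.0.0.1", "port": 9010 }
    var target = net.connect(header.port, header.host, function () {
      var rest = first.slice(nl + 1);
      if (rest.length) target.write(rest); // forward any file bytes that arrived with the header
      socket.pipe(target).pipe(socket);    // 2-way pipe for the remaining data
    });
  });
}).listen(9000);

// Client usage: send the header line first, then stream the file.
// client.write(JSON.stringify({ host: '127.0.0.1', port: 9010 }) + '\n');
// myFile.pipe(client);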
In Node, when you create a socket server and connect to it with a client, the write function triggers the data event, but it seems there is no way to distinguish the source of the traffic (other than adding your own IDs/headers to each sent buffer).
For example, in the output below, "server says hello" comes from server.write, and all of the "n client msg" lines come from client.write, yet they all arrive through the same on('data', fn):
➜ sockets node client.js
client connected to server!
client data: server says hello
client data: 1 client msg!
client data: 2 client msg!
client data: 3 client msg!
client data: 4 client msg!
Is there a correct way to distinguish the source of the data on a socket?
The code for a simple client:
// client.js
var net = require('net');
var split = require('split');

var client = net.connect({
  port: 8124
}, function() {
  //'connect' listener
  console.log('client connected to server!');
  client.write('1 client msg!\r\n');
  client.write('2 client msg!\r\n');
  client.write('3 client msg!\r\n');
  client.write('4 client msg!\r\n');
});

client.on('end', function() {
  console.log('client disconnected from server');
});

var stream = client.pipe(split());
stream.on('data', function(data) {
  console.log("client data: " + data.toString());
});
and the code for the server:
// server.js
var net = require('net');
var split = require('split');

var server = net.createServer(function(c) { //'connection' listener
  console.log('client connected');

  c.on('end', function() {
    console.log('client disconnected');
  });

  c.write('server says hello\r\n');
  c.pipe(c);

  var stream = c.pipe(split());
  stream.on('data', function(data) {
    console.log("client data: " + data.toString());
  });
});

server.listen(8124, function() { //'listening' listener
  console.log('server bound');
});
The source of the traffic is the server.
If you're wanting to know whether it's data being echoed back to the client by the server, you will have to come up with your own protocol for denoting that.
For example, the server could respond with newline-delimited JSON data that is prefixed by a special byte that indicates whether it's an echo or an "original" response (or any other kind of "type" value you want to have). Then the client reads a line in, checks the first byte value to know if it's an echo or not, then JSON.parse()s the rest of the line after the first byte.
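For example, a minimal sketch of such a framing (the 'E'/'R' type bytes and the JSON payload shape are just an illustration), reusing the split() streams from the question:

// server side: instead of c.pipe(c), echo each line back marked as an echo ('E'),
// and mark original server responses with 'R'
c.write('R' + JSON.stringify({ msg: 'server says hello' }) + '\r\n');
var lines = c.pipe(split());
lines.on('data', function(line) {
  if (!line.length) return;
  c.write('E' + JSON.stringify({ msg: line }) + '\r\n');
});

// client side: check the first byte before parsing the rest of the line
var stream = client.pipe(split());
stream.on('data', function(line) {
  if (!line.length) return;
  var isEcho = line[0] === 'E';
  var payload = JSON.parse(line.slice(1));
  console.log((isEcho ? 'echo: ' : 'server: ') + payload.msg);
});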
You can distinguish each client with:
c.name = c.remoteAddress + ":" + c.remotePort;

c.on('data', function(data) {
  console.log('data ' + data + ' from ' + c.name);
});
Can someone give me a working example of how to create a server that listens on a stream and replies when a request comes through?
Here's what I have so far:
var port = 4567,
    net = require('net');

var sockets = [];

function receiveData(socket, data) {
  console.log(data);
  var dataToReplyWith = ' ... ';
  // ... HERE I need to reply somehow with something to the client that sent the initial data
}

function closeSocket(socket) {
  var i = sockets.indexOf(socket);
  if (i != -1) {
    sockets.splice(i, 1);
  }
}

var server = net.createServer(function (socket) {
  console.log('Connection ... ');
  sockets.push(socket);
  socket.setEncoding('utf8');

  socket.on('data', function(data) {
    receiveData(socket, data);
  });

  socket.on('end', function() {
    closeSocket(socket);
  });
}).listen(port);
Will the socket.write(dataToReplyWith); do it?
Yes, you can write to the socket at any point (as long as the socket is still writable, of course). However, the problem you may run into is that one data event does not necessarily correspond to one complete "request." Since TCP is just a stream, you can get any amount of data, and it may not align with your protocol's message boundaries. So you could get half a "request" in one data event and the other half in the next, or the opposite: multiple "requests" in a single data event.
If this is your own custom client/server design and you do not already have some sort of protocol in place to determine "request"/"response" boundaries, you should incorporate that now.
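A minimal sketch of what that could look like with newline-delimited messages, filling in the receiveData function from the question (the per-socket buffer map and the 'ack: ' reply format are just illustrative):

var buffers = new Map(); // per-socket buffer of not-yet-complete data

function receiveData(socket, data) {
  var buffered = (buffers.get(socket) || '') + data;
  var lines = buffered.split('\n');
  buffers.set(socket, lines.pop()); // keep the incomplete tail for the next 'data' event

  lines.forEach(function (request) {
    console.log('request: ' + request);
    var dataToReplyWith = 'ack: ' + request + '\n';
    socket.write(dataToReplyWith); // reply to the client that sent this request
  });
}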