I have made a simple server and client program where the server reads data from a file and sends it to the client through a TCP socket. But the data I am getting is an object, not a simple string.
So why can't I see the data as plain text, as it is in my data.txt file?
An explanation with an example would be appreciated.
Here is my code:
SERVER CODE
const fs = require('fs');
const net = require('net');
const readableData = fs.createReadStream('data.txt', 'utf8');
const server = net.createServer(socket => {
socket.on('data', chunk => {
console.log(chunk.toString());
socket.write(JSON.stringify(readableData));
});
socket.on('end', () => {
console.log("done");
})
socket.on('close', () => {
console.log("closed")
})
});
server.listen(3000);
CLIENT CODE
const fs = require('fs');
const net = require('net');
const client = new net.Socket();
client.connect('3000', () => {
console.log("connected");
client.write("Server please send the data");
});
client.on('data', chunk => {
console.log("Data recieved:" + chunk.toString());
});
client.on('finish', () => {
console.log("Work completed");
})
client.on('close', () => {
console.log("connection closed");
})
And here is my data.txt file, which has simple data:
Hello client how are you ?
And here is the output I'm getting:
Data recieved:{"_readableState":{"objectMode":false,"highWaterMark":65536,"buffer":{"head":{"data":"Hello client how are you ?","next":null},"tail":{"data":"Hello client how are you ?","next":null},"length":1},"length":26,"pipes":null,"pipesCount":0,"flowing":null,"ended":true,"endEmitted":false,"reading":false,"sync":false,"needReadable":false,"emittedReadable":false,"readableListening":false,"resumeScheduled":false,"paused":true,"emitClose":false,"autoDestroy":false,"destroyed":false,"defaultEncoding":"utf8","awaitDrain":0,"readingMore":false,"decoder":{"encoding":"utf8"},"encoding":"utf8"},"readable":true,"_events":{},"_eventsCount":1,"path":"data.txt","fd":35,"flags":"r","mode":438,"end":null,"autoClose":true,"bytesRead":26,"closed":false}
The question is: why can't I see the data as plain text on the client side, as it is in the data.txt file?
Your variable readableData contains a Node.js stream object. That stream is only meaningful inside the current Node.js process, so trying to send it to the client doesn't do anything useful; JSON.stringify() just serializes the stream's internal state, which is exactly the object you see in your output, not the file's contents.
If you want to send the client all the data from that data.txt file, you have several choices:
1. You can read the whole file into a local variable with fs.readFile() and then send all that data with socket.write().
2. You can create a new stream attached to the file for each incoming request and, as the data comes in on the read stream, send it out on the socket (this is often referred to as piping one stream into another). If you use higher-level server constructs such as an HTTP server, they make piping very easy.
Option #1 would look like this:
const server = net.createServer(socket => {
socket.on('data', chunk => {
console.log(chunk.toString());
fs.readFile('data.txt', 'utf8', (err, data) => {
if (err) {
// insert error handling here
console.log(err);
} else {
socket.write(data);
}
});
});
socket.on('end', () => {
console.log("done");
})
socket.on('close', () => {
console.log("closed")
})
});
FYI, you should also know that socket.on('data', chunk => {...}) can give you any size chunk of data. TCP does not guarantee that data is delivered in the same pieces it was originally sent in. It will arrive in order, but if you sent three 1k chunks from the other end, they might arrive as three separate 1k chunks, as one 3k chunk, or as a whole bunch of much smaller chunks. How they arrive often depends on the intermediate transports and routers the data travels over and whether there were any recoverable issues along the way. For example, data sent over a satellite internet connection will probably arrive in small chunks because the needs of the transport broke it up into smaller pieces.
This means that reading any data over a plain TCP connection generally needs some sort of protocol so that the reader knows when they've gotten a full, meaningful chunk that they can process. If the data is plain text, it might be a protocol as simple as ending every message with a line-feed character. But if the data is more complex, the protocol may need to be more complex.
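For example, a line-feed-delimited protocol on the reading side can be handled with a small helper that buffers partial lines until a full one arrives (makeLineReader is an illustrative name, not a library function):

```javascript
// Accumulate arbitrary chunks and emit one complete line at a time.
function makeLineReader(onLine) {
    let pending = '';
    return chunk => {
        pending += chunk.toString();
        let idx;
        while ((idx = pending.indexOf('\n')) !== -1) {
            onLine(pending.slice(0, idx)); // a full message
            pending = pending.slice(idx + 1); // keep the remainder
        }
    };
}

// Usage on a socket: socket.on('data', makeLineReader(line => { /* ... */ }));
// Feeding it chunks split mid-message still yields whole lines:
const lines = [];
const feed = makeLineReader(line => lines.push(line));
feed('hel');
feed('lo\nwor');
feed('ld\n');
// lines is now ['hello', 'world']
```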
Related
I'm trying to write a very simple Node TCP server which reads in the full input stream and writes out some function of the input. The output cannot be generated without the full input, so the writes cannot be streamed as the input comes in. For simplicity's sake in this post, I have omitted the actual transformation of the input and am just writing back the input.
My initial attempt was to write within the end event handler:
const net = require('net');
const server = net.createServer(async client => {
let data = '';
client.on('end', () => {
client.write(data);
});
client.on('data', part => {
data += part.toString();
});
client.pipe(client);
});
server.listen(8124);
But this results in a Socket.writeAfterFIN error ("This socket has been ended by the other party"), which led me to enable allowHalfOpen, because the docs seem to indicate that it separates the incoming and outgoing FIN packets.
const net = require('net');
const drain = client =>
new Promise(resolve => {
let data = '';
client.on('end', () => {
console.log('end');
resolve(data);
});
client.on('data', part => {
console.log('data');
data += part.toString();
});
});
const server = net.createServer({ allowHalfOpen: true }, async client => {
const req = await drain(client);
client.end(req);
});
server.listen(8124);
This works when I use e.g. echo 'abc' | nc localhost 8124, but I'm not sure whether allowHalfOpen should be necessary here. Is there another way to write shortly after the end of the input stream?
Using netcat instead of curl resolves the issue. e.g. echo 'abc' | nc localhost 8124. This is also more in line with what I need to do anyway since I don't need HTTP for this server.
Code
var websock = net.createServer(function(sock) {
sock.pipe(sock);
sock.setEncoding('utf8');
sock.setKeepAlive(true);
sock.on("data", function(d) {
console.log("websock", d);
});
sock.on('end', function() {
console.log('websock disconnected');
});
});
websock.listen(777, '127.0.0.1');
After a few minutes (~15 min), the callback code in sock.on("data", function() {}) seems to stop working. Why is that? I checked the console log; there is no entry with the string "websock disconnected".
If the socket is not disconnected and there is no error, what has happened to the socket connection or the data stream?
The other end (the server side, the data sender) seems to be streaming data continuously, while the client side (the Node.js app) has stopped receiving data.
The issue arises from your use of the pipe mechanism to echo back data that is never consumed on the other side (communication is unidirectional):
sock.pipe(sock);
This makes your code work as an echo server. Your socket "sock" is a duplex stream (i.e. both readable - for the incoming data you receive, and writable - for outgoing data you send back).
A quick fix, if you don't need to respond and just need to receive data, is to simply delete the "sock.pipe(sock);" line. For the explanation, read ahead.
Most probably your data source (the MT5 application you mentioned) sends data continuously and it doesn't read what you send back at all. So, your code keeps echoing back the received data using sock.pipe(sock), filling the outgoing buffer which is never consumed. However, the pipe mechanism of Nodejs streams handles backpressure, which means that when two streams (a readable and a writable one) are connected by a pipe, if the outgoing buffer is filling (reaching a high watermark), the readable stream is paused, to prevent the "overflow" of the writable stream.
You can read more about backpressure in the Nodejs docs. This fragment particularly describes how streams are handling backpressure:
In Node.js the source is a Readable stream and the consumer is the Writable stream [...]
The moment that backpressure is triggered can be narrowed exactly to the return value of a Writable's .write() function. [...]
In any scenario where the data buffer has exceeded the highWaterMark or the write queue is currently busy, .write() will return false.
When a false value is returned, the backpressure system kicks in. It will pause the incoming Readable stream from sending any data and wait until the consumer is ready again.
Below you can find my setup showing where backpressure kicks in; there are two files, server.js and client.js. If you run them both, the server will soon print "BACKPRESSURE" to the console. Because the server does not handle backpressure (it ignores that socket.write() starts returning false at some point), the outgoing buffer keeps filling, consuming more and more memory, whereas in your scenario socket.pipe() was handling backpressure and therefore paused the flow of incoming messages.
The server:
// ----------------------------------------
// server.js
var net = require('net');
var server = net.createServer(function (socket) {
console.log('new connection');
// socket.pipe(socket); // replaced with socket.write on each 'data' event
socket.setEncoding('utf8');
socket.setKeepAlive(true);
socket.on("data", function (d) {
console.log("received: ", d);
var result = socket.write(d);
console.log(result ? 'write ok' : 'BACKPRESSURE');
});
socket.on('error', function (err) {
console.log('client error:', err);
});
socket.on('end', function () {
console.log('client disconnected');
});
});
server.listen(10777, '127.0.0.1', () => {
console.log('server listening...');
});
The client:
// ----------------------------------------
// client.js
var net = require('net');
var client = net.createConnection(10777, () => {
console.log('connected to server!' + new Date().toISOString());
var count = 1;
var date;
while(count < 35000) {
count++;
date = new Date().toISOString() + '_' + count;
console.log('sending: ', date);
client.write(date + '\n');
}
});
client.on('data', (data) => {
console.log('received:', data.toString());
});
client.on('end', () => {
console.log('disconnected from server');
});
I have a get request in node that successfully receives data from API.
When I pipe that response directly to a file like this, it works, the file created is a valid, readable pdf (as i expect to receive from the API).
var http = require('request');
var fs = require('fs');
http.get(
{
url:'',
headers:{}
})
.pipe(fs.createWriteStream('./report.pdf'));
Simple. However, the file gets corrupted if I use the event emitters of the request like this:
http.get(
{
url:'',
headers:{}
})
.on('error', function (err) {
console.log(err);
})
.on('data', function(data) {
file += data;
})
.on('end', function() {
var stream = fs.createWriteStream('./report.pdf');
stream.write(file, function() {
stream.end();
});
});
I have tried all manner of writing this file and it always ends in a totally blank PDF; the only time the PDF is valid is via the pipe method.
When I console.log the events, the sequence seems correct: all chunks are received and then the end event fires at the end.
This makes it impossible to do anything after the pipe. What is pipe doing differently from the write stream?
I assume that you initialize file as a string:
var file = '';
Then, in your data handler, you add the new chunk of data to it:
file += data;
However, this performs an implicit conversion to (UTF-8-encoded) strings. If the data is actually binary, as with a PDF, this will corrupt the output data.
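The corruption is easy to demonstrate; round-tripping arbitrary bytes through a UTF-8 string does not preserve them, because invalid byte sequences are replaced during the conversion:

```javascript
// Bytes that are not valid UTF-8 get replaced (with U+FFFD) on conversion.
const original = Buffer.from([0xff, 0xfe, 0x00, 0x41]);
const viaString = Buffer.from(original.toString('utf8'));

console.log(original.length, viaString.length); // 4 8 (the data grew: it was mangled)
console.log(original.equals(viaString)); // false
```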
Instead, you want to collect the data chunks, which are Buffer instances, and use Buffer.concat() to concatenate all those buffers into one large (binary) buffer:
var file = [];
...
.on('data', function(data) {
file.push(data);
})
.on('end', function() {
file = Buffer.concat(file);
...
});
If you wanted to do something after the file is done being written by pipe, you can add an event listener for finish on the object returned by pipe.
.pipe(fs.createWriteStream('./report.pdf'))
.on('finish', function done() { /* the file has been written */ });
Source: https://nodejs.org/api/stream.html#stream_event_finish
I would like to receive binary data (like .pdf or .doc) from a TCP socket. Here is the code:
To send the file :
fs.readFile(path, function (err, data) {
var client = new net.Socket();
client.connect(user_port, user_ip, function () {
client.write(data, 'binary');
client.destroy();
});
});
To receive the file :
net.createServer(function(socket){
socket.setEncoding('binary');
socket.on('data', function (data) {
var file_data = new Buffer(data, 'binary');
fs.appendFile(utils.getUserDir() + '/my_file.doc', file_data);
});
socket.on('error', function(err){
console.log(err.message);
})
}).listen(utils.getPort(), utils.getExternalIp());
As the files are too big for a single TCP packet, they are sent as multiple packets; in fact, there are multiple 'data' events for the same file.
I thought it was possible to append each Buffer of data to a file, but when I open the .doc, it is corrupted or has binary garbage in it.
PS: I can't use Buffer.concat() and save the file at the end, since I don't know which packet is the last one...
Thank you
For sending files like this, it's better to stream them instead of buffering the whole thing in memory and then sending it. (Also, you don't need the 'binary' encoding argument, since fs.readFile() gives you a Buffer by default.)
For example:
var client = new net.Socket();
client.connect(user_port, user_ip, function() {
fs.createReadStream(path).pipe(client);
});
// ...
net.createServer(function(socket){
socket.pipe(fs.createWriteStream(utils.getUserDir() + '/my_file.doc'));
socket.on('error', function(err){
console.log(err.message);
});
}).listen(utils.getPort(), utils.getExternalIp());
Can someone give me a working example of how to create a server that listens on a stream and replies when a request comes through?
Here's what I have so far:
var port = 4567,
net = require('net');
var sockets = [];
function receiveData(socket, data) {
console.log(data);
var dataToReplyWith = ' ... ';
// ... HERE I need to reply somehow with something to the client that sent the initial data
}
function closeSocket(socket) {
var i = sockets.indexOf(socket);
if (i != -1) {
sockets.splice(i, 1);
}
}
var server = net.createServer(function (socket) {
console.log('Connection ... ');
sockets.push(socket);
socket.setEncoding('utf8');
socket.on('data', function(data) {
receiveData(socket, data);
})
socket.on('end', function() {
closeSocket(socket);
});
}).listen(port);
Will the socket.write(dataToReplyWith); do it?
Yes, you can just write to the socket whenever you like (as long as the socket is still writable, of course). However, the problem you may run into is that one data event does not necessarily correspond to one complete "request." Since TCP is just a stream, you can receive any amount of data, which may not align with your protocol's message boundaries. You could get half a "request" in one data event and the other half in another, or the opposite: multiple "requests" in a single data event.
If this is your own custom client/server design and you do not already have some sort of protocol in place to determine "request"/"response" boundaries, you should incorporate that now.
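One common choice is a length-prefixed protocol: each message is preceded by a fixed-size header carrying its byte length, so the reader always knows where a message ends. A sketch of the framing helpers (encodeFrame and makeFrameDecoder are illustrative names; this assumes you do not call socket.setEncoding(), so data chunks stay Buffers):

```javascript
// Frame layout: 4-byte big-endian length header, then the payload bytes.
function encodeFrame(payload) {
    const body = Buffer.from(payload);
    const header = Buffer.alloc(4);
    header.writeUInt32BE(body.length, 0);
    return Buffer.concat([header, body]);
}

// Stateful decoder: feed it arbitrary chunks, get complete messages back.
function makeFrameDecoder(onMessage) {
    let pending = Buffer.alloc(0);
    return chunk => {
        pending = Buffer.concat([pending, chunk]);
        while (pending.length >= 4) {
            const len = pending.readUInt32BE(0);
            if (pending.length < 4 + len) break; // wait for the rest of the message
            onMessage(pending.slice(4, 4 + len));
            pending = pending.slice(4 + len);
        }
    };
}

// In a server like the one above: socket.on('data', makeFrameDecoder(msg => { /* ... */ }));
```

Because the decoder buffers partial frames, it works no matter how TCP splits the data: a frame arriving in two pieces still produces exactly one message.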