Node stream pipeline stops prematurely - node.js

Think of this piece of code as a relay between a NAT-ted service (input) and an external service (output) that wants to communicate with the input.
The relay runs on a public server and opens two ports in order to relay:
port 4040, where the input connects and forwards the TCP traffic from the target service
port 4041, where some external client connects to the relay
The relay should pipe what it receives from the input on port 4040 to the external client on port 4041.
I can see both services connecting to the relay, but the data flow just stops after what I suspect is the output socket closing. In the following example I used stream.pipeline, but I also tried a simple .pipe directly on the sockets, with the same results:
import net from "net"
import stream from "stream";

export default () => {
  const inputServer = net.createServer();
  const outputServer = net.createServer();

  inputServer.listen(4040, "0.0.0.0", () => {
    console.log('TCP Server is running on port ' + 4040 + '.');
  });
  outputServer.listen(4041, "0.0.0.0", () => {
    console.log('TCP Server is running on port ' + 4041 + '.');
  });

  let inSocket = null;

  inputServer.on('connection', (sock) => {
    inSocket = sock;
  });

  outputServer.on('connection', (sock) => {
    if (inSocket) {
      stream.pipeline(inSocket, sock, (err) => {
        if (err) {
          console.error('Pipeline failed.', err);
        } else {
          console.log('Pipeline succeeded.');
        }
      })
      stream.pipeline(sock, inSocket, (err) => {
        if (err) {
          console.error('Pipeline failed.', err);
        } else {
          console.log('Pipeline succeeded.');
        }
      })
    }
  });
}
My goal is to keep an open socket to the input service and relay it to whichever output client connects.

data flow just stops after what I suspect is the output socket closing
pipeline() or .pipe() will automatically close the output stream when the input stream ends, so it won't keep your input stream open across successive output connections.
Using .pipe(), you can override that behavior by passing an option:
inSocket.pipe(sock, {end: false});
and
sock.pipe(inSocket, {end: false});
You will then need some separate error handling for each stream, as .pipe() doesn't do error handling as complete as pipeline()'s.
In this way, you let each stream close itself when the client chooses rather than having the server close it when a given streaming operation is complete.
I don't see a similar option for pipeline().
I'm also curious how you plan to use this. Do you intend for there to be one long-lived connected inSocket and many separate connections to the outputServer? Do you need more than one outputServer connection at once, or just one at a time? Since you're not auto-destroying, I wonder if you need to do some manual cleanup (unpiping, for example) when any socket disconnects. .pipe() is also famous for not unwinding all its event listeners, which can sometimes lead to GC issues if you don't manually clean things up properly.
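Putting the pieces together, here's a minimal sketch of the relay using .pipe() with { end: false } plus the manual cleanup discussed above. The one-persistent-input / one-output-at-a-time structure and the createRelay helper are assumptions based on the question, not the asker's actual code:

```javascript
import net from 'net';

// Sketch: keep the long-lived input socket open across successive output
// clients by piping with { end: false }, and unpipe manually when an
// output client goes away. Assumes one persistent input connection and
// one output client at a time.
function createRelay(inPort, outPort) {
  let inSocket = null;

  const inputServer = net.createServer((sock) => {
    inSocket = sock;
    sock.on('error', (err) => console.error('input socket error', err));
    sock.on('close', () => { inSocket = null; });
  });

  const outputServer = net.createServer((sock) => {
    if (!inSocket) return sock.destroy(); // no input to relay yet
    const input = inSocket;
    input.pipe(sock, { end: false });     // input -> output
    sock.pipe(input, { end: false });     // output -> input
    sock.on('error', (err) => console.error('output socket error', err));
    sock.on('close', () => {
      // Manual cleanup so the next output client can attach cleanly.
      input.unpipe(sock);
      sock.unpipe(input);
    });
  });

  inputServer.listen(inPort);
  outputServer.listen(outPort);
  return { inputServer, outputServer };
}
```

Because neither pipe auto-ends its destination, an output client disconnecting leaves the input socket connected and ready for the next output client.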

Related

How to properly destroy a socket so that it is ready to be called again after program exits? (Node.js)

Here is a bit of code where a server is created to listen on port 2222:
import { createServer } from 'net';

const server = createServer((c) => {
  c.setEncoding('utf8');
  c.on('data', (data) => {
    console.log('server', data);
    c.write(data);
  });
  c.on('error', (e) => { throw e; });
});
server.listen(2222);
and here is the code to create a connection to the server and send a simple 'hello' that the server will echo back. After 2 seconds, the socket gets destroyed.
import { createConnection } from 'net';

const socket = createConnection({ localPort: 9999, port: 2222, host: 'localhost' });
socket.setEncoding('utf8');
socket.on('data', (data) => {
  console.log('socket data', data);
});
socket.on('connect', () => {
  socket.write('hello');
});
socket.setTimeout(2000);
socket.on('timeout', () => { socket.destroy(); console.log('destroyed'); });
socket.on('error', (e) => { throw e; });
This code works well the first time it is called.
It will fail on subsequent calls with:
Error: connect EADDRINUSE 127.0.0.1:2222 - Local (0.0.0.0:9999)
  errno: -48,
  code: 'EADDRINUSE',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 2222
It took me a while to figure out, but the problem comes from trying to bind the socket to a fixed outbound port: localPort: 9999. Without this parameter, the OS selects a free port and the program doesn't crash.
When the port is specified, a cooldown of ~15 s is required before the socket is reusable again.
Is there a way to properly destroy the socket so that it becomes immediately available again?
If not, is there a way to verify that the socket is "cooling down", but will eventually be available again? I'd like to distinguish the case where I just have to wait from the one where the socket has been actively taken by another process, and won't be released to the pool of free sockets.
Any theory on why the socket is not available after the program exits is welcome!
The socket is going into the TIME_WAIT state, so you have to wait until the port is available again before you can reconnect. You can try to avoid this by:
Removing the fixed source port and letting the platform pick a random ephemeral one for you (as you have already found out).
Closing the connection at the server end first (so the TIME_WAIT ends up there). See server.close().
Resetting the client connection. See socket.resetAndDestroy().
Why are you specifying a local port in the first place? You almost never want to do that. If you don't, it'll work.
Any theory on why the socket is not available after the program exits is welcome!
The OS keeps the port in use for a while to be able to receive packets and tell the sender that the socket is gone.
Is there a way to properly destroy the socket so that it becomes immediately available again?
You can set the "linger" socket option to 0 so the port becomes available immediately again, but again, you shouldn't get yourself into this situation in the first place. Consider whether you really need to specify the local port. You usually don't.
is there a way to verify that the socket is "cooling down"
It'll be in the TIME_WAIT state.
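To illustrate the third option above, here's a small sketch of socket.resetAndDestroy() (available since Node 16.17) in practice; the closeHard helper name is mine:

```javascript
import net from 'net';

// Sketch: instead of socket.destroy(), whose graceful FIN close parks the
// bound local port in TIME_WAIT, send an RST so the port is released
// immediately. Requires Node >= 16.17; helper name is an assumption.
function closeHard(socket) {
  socket.resetAndDestroy(); // sends an RST instead of the FIN handshake
}
```

After closeHard(), reconnecting with the same localPort to the same server should succeed immediately, whereas a graceful close would hit the ~15 s cooldown. Note the server side will see the reset as an ECONNRESET error on its socket.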

Acting as a modbus slave through RTU - communication questions

I am using the following code to simulate a modbus slave device:
const modbus = require('jsmodbus')
const SerialPort = require('serialport')

const options = {
  baudRate: 115200
}
const socket = new SerialPort("COM4", options)
const server = new modbus.server.RTU(socket)

server.on('connect', function (client) {
  console.log("in connect")
  console.log(client);
});
server.on('connection', function (client) {
  console.log("in connection")
  console.log(client);
});
server.on('readHoldingRegisters', function (adr, len) {
  console.log("in readHoldingRegisters")
  console.log("adr: " + adr);
  console.log("len: " + len);
});
The code above does actually simulate a device. The master I have set up can see a slave device when I run this code. The problem is that I can't seem to get the server functions to reach their console.log sections.
I have two theories.
First, my slave device uses the jsmodbus library to simulate a server and my master device uses modbus-serial to communicate. Could this cause a problem?
My second theory is that the code I have above is running all at once and doesn't look around or stay open to see future communications. How would I solve this?
I am open to new theories as well. My goal is eventually to pass modbus data back to the master through the server.on commands.
Edit
I know for sure there is data coming in if I read the data on the serial port directly:
socket.on('readable', function () {
  console.log('Data:', socket.read()) // prints incoming buffer data
});
I still am not getting data through the server commands.

Is it possible to keep socket.io connections alive after restarting Nodejs server?

I wrote an online multiplayer turn based game which uses socket.io as transport mechanism.
But the problem is that when I want to change some code in the application, it must be restarted. This causes the socket connections to disconnect and reconnect, so the game flow (including timers, state, etc.) gets corrupted.
How can I handle this situation?
I tried to write a reset handler module:
http.listen(port, () => {
  const argv = process.argv.slice(2)
  if (argv[0] === 'platform-Master') {
    const resetHandler = require('./reset-handler')()
    setTimeout(async () => {
      await resetHandler.findOpenGames()
    }, 3000)
  }
})
But it only works in a single process, not in a cluster or at scale.

How to handle CONTROL+C in node.js TCP server

How do I handle the CONTROL+C input in a node.js TCP server?
var server = net.createServer(function(c) {
  c.on('end', function() {
    console.log('Client disconnected');
  });
  c.on('data', function(data) {
    if (data == "CONTROL+C") { // Here is the check
      c.destroy();
    }
  });
}).listen(8124);
Control-C is a single byte, 0x03 (using an ASCII chart is kinda helpful).
However, whenever you're dealing with a socket connection you have to remember that you're going to receive data in a "chunked" fashion and the chunking does not necessarily correspond to the way the data was sent; you cannot assume that one send call on the client side corresponds to a single chunk on the server side. Therefore you can't assume that if the client sends a Control-C, it will be the only thing you receive in your data event. Some other data might come before it, and some other data might come after it, all in the same event. You will have to look for it inside your data.
From ebohlman's answer. It works:
c.on('data', function(data) {
  if (data.toString().charCodeAt(0) === 3) {
    c.destroy();
  }
});
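Note that charCodeAt(0) only inspects the first byte, so a Ctrl-C arriving mid-chunk (the situation the chunking caveat above warns about) would be missed. A chunk-safe variant scans the whole buffer; the containsCtrlC helper name is mine:

```javascript
// Chunk-safe check: scan the entire received chunk for the ETX byte (0x03)
// instead of only the first byte, since other data may arrive before or
// after the Ctrl-C in the same 'data' event.
function containsCtrlC(chunk) {
  return Buffer.from(chunk).includes(0x03); // Buffer#includes accepts a byte value
}

// Usage inside the 'data' handler:
// c.on('data', function (data) {
//   if (containsCtrlC(data)) c.destroy();
// });
```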

NodeJS socket.io-client doesn't fire 'disconnect' or 'close' events when the server is killed

I've written up a minimal example of this. The code is posted here: https://gist.github.com/1524725
I start my server, start my client, verify that the connection between the two is successful, and finally kill the server with CTRL+C. When the server dies, the client immediately runs to completion and closes without printing the message in either on_client_close or on_client_disconnect. There is no perceptible delay.
From the reading I've done, because the client process is terminating normally there isn't any chance that the STDOUT buffer isn't being flushed.
It may also be worth noting that when I kill the client instead of the server, the server responds as expected, firing the on_ws_disconnect function and removing the client connection from its list of active clients.
32-bit Ubuntu 11.10
Socket.io v0.8.7
Socket.io-client v0.8.7
NodeJS v0.6.0
Thanks!
--- EDIT ---
Please note that both the client and the server are Node.js processes rather than the conventional web browser client and node.js server.
NEW ANSWER
Definitely a bug in io-client. :(
I was able to fix this by modifying socket.io-client/libs/socket.js. Around line 433, I simply moved the this.publish('disconnect', reason); above if (wasConnected) {.
Socket.prototype.onDisconnect = function (reason) {
  var wasConnected = this.connected;
  this.publish('disconnect', reason);
  this.connected = false;
  this.connecting = false;
  this.open = false;
  if (wasConnected) {
    this.transport.close();
    this.transport.clearTimeouts();
After pressing ctrl+c, the disconnect message fires in roughly ten seconds.
OLD DISCUSSION
To notify the client of shutdown events, you would add something like this to demo_server.js:
var logger = io.log;

process.on('uncaughtException', function (err) {
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: err.toString(), stack: err.stack});
  }
  logger.error(err);
  logger.error(err.stack);
  // todo: should we have some default resetting (restart server?)
  app.close();
  process.exit(-1);
});

process.on('SIGHUP', function () {
  logger.error('Got SIGHUP signal.');
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: 'server disconnected with SIGHUP'});
  }
  // todo: what happens on a SIGHUP?
  // todo: if you're using upstart, just call restart node demo_server.js
});

process.on('SIGTERM', function () {
  logger.error('Shutting down.');
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: 'server disconnected with SIGTERM'});
  }
  app.close();
  process.exit(-1);
});
Of course, what you send in the broadcast.send(...) (or even which command you use there) depends on your preference and client structure.
For the client side, you can tell if the server connection is lost using on('disconnect', ...), which you have in your example:
client.on('disconnect', function(data) {
  alert('disconnected from server; reconnecting...');
  // and so on...
});