I have a Node.js server application that listens for raw TCP connections using the net module.
In order to keep track of network latency, I want to access the TSval and TSecr TCP fields. How and where exactly should this be done? The server layout is nothing spectacular:
var net = require('net');

var tcps = net.createServer(function(c) {
  /* socket initialization */
  ...
  c.on('data', function(chunk) {
    /* message processing */
    ...
  });
});

tcps.listen(serverPort);
See @mscdex's comment:
That kind of low-level access to TCP packets is not available in node.js
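Since those TCP header fields live in the kernel and are not exposed by the net module, one workaround is to measure round-trip time at the application layer instead. A minimal sketch, assuming a trivial JSON-per-write framing invented for this example (the message format and the serverPort variable are illustrative, not part of any library):

var net = require('net');
var serverPort = 9000; // placeholder port for the sketch

// Server side: echo pings back as pongs, preserving the timestamp.
var tcps = net.createServer(function(c) {
  c.on('data', function(chunk) {
    var msg = JSON.parse(chunk.toString('utf8'));
    if (msg.type === 'ping') {
      c.write(JSON.stringify({ type: 'pong', sentAt: msg.sentAt }) + '\n');
    }
  });
});
tcps.listen(serverPort);

// Client side: send a timestamped ping and measure the round trip.
var client = net.createConnection(serverPort, 'localhost', function() {
  client.write(JSON.stringify({ type: 'ping', sentAt: Date.now() }) + '\n');
});
client.on('data', function(chunk) {
  var msg = JSON.parse(chunk.toString('utf8'));
  if (msg.type === 'pong') {
    // Application-level RTT: unlike TSval/TSecr this includes
    // event-loop delay on both ends, but it needs no kernel access.
    console.log('RTT:', Date.now() - msg.sentAt, 'ms');
  }
});

Note that this assumes each write arrives as a single chunk, which only holds for tiny messages like these; see the framing discussion further down for the general case.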
Related
I'm trying to communicate between a Node.js socket (I am using zmq) and an external TCP socket via ZMQ.
The external socket (tcp://***.***.***.***:5555) is part of a C++ service, and it acts as a dealer.
The Node.js server should act as a router, which should monitor and pass along messages to the available workers (messages that should arrive from the external TCP socket).
The connection is successfully made between both services, but once I am connected to the TCP socket, I don't receive any message from the external service.
// Node.js server
let zmq = require('zmq');
let socket = zmq.socket('router');

// Successfully connected
socket.on('connect', () => { console.log('Connected!'); });

// No message received from tcp
socket.on('message', (message) => { console.log('Message: ', message); });

socket.monitor(500, 0);
socket.connect('tcp://***.***.***.***:5555');
Any thought will be very welcomed!
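For reference, one behaviour of ROUTER sockets that often trips this up (an educated guess, not a confirmed diagnosis of this particular setup): a ROUTER prepends the peer's identity as an extra frame to every incoming message, and with the legacy zmq package each frame arrives as a separate argument to the 'message' listener. Replies must also carry that identity frame back. A minimal sketch:

let zmq = require('zmq');
let socket = zmq.socket('router');

// A ROUTER receives [identity, ...frames]; the zmq package spreads
// the frames across the listener's arguments.
socket.on('message', (identity, message) => {
  console.log('From', identity.toString('hex'), ':', message.toString());
  // Replies must be prefixed with the identity frame so the ROUTER
  // knows which connected DEALER to deliver them to.
  socket.send([identity, 'ack']);
});

socket.monitor(500, 0); // required for 'connect'/'disconnect' events
socket.connect('tcp://***.***.***.***:5555');

Also note that a ROUTER emits nothing until the peer actually sends something, so a silent C++ dealer would produce exactly the symptom described.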
I have a device on my local network that sends JSON data through TCP:
ex. {"cmd": "listactions", "id": 13, "value": 100}
The data is sent from 192.168.0.233 on port 8000
(I've checked this with Wireshark)
I would like to (with my homeserver) intercept those specific commands with Node.js & send them through with Pusher.
I have very little experience with Node.js & can't seem to get this working. I tried the following:
var net = require('net');

var server = net.createServer(function(sock) {
  sock.setEncoding('utf8');
  sock.on('data', function(data) {
    // post data to a server so it can be saved and stuff
    console.log(data);
    // close connection
    sock.end();
  });
  sock.on('error', function(error) {
    console.log('******* ERROR ' + error + ' *******');
    // close connection
    sock.end();
  });
});

server.on('error', function(e) {
  console.log(e);
});

server.listen(8000, '192.168.0.233', function() {
  // success, listening
  console.log('Server listening');
});
But it complains about EADDRNOTAVAIL; from research I've learned that this means the address is unavailable... which I don't understand, because I'm able to see the data with Wireshark.
What's the correct way to do this?
Wireshark doesn't listen on all ports. It uses a kernel driver (with libpcap or winpcap) to "sniff" data off the wire. That is, it doesn't sit in between anything. It's not a man-in-the-middle. It just receives a copy of data as it is sent along.
Your application needs to listen on that port if you expect it to receive data. You should use netstat to figure out what process is listening on port 8000 and kill it. Then, you can listen on port 8000.
You could in theory use libpcap to get your data but that's crazy. There is a ton of overhead, something has to listen on that port anyway (it's a TCP connection after all), and it's extremely complex to filter out what you want. Basically, you would be reimplementing the network stack. Don't.
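One extra wrinkle worth noting (an assumption about the setup, since the question doesn't say whose address 192.168.0.233 is): server.listen(8000, '192.168.0.233', ...) binds to that specific IP, and EADDRNOTAVAIL is what Node reports when that IP doesn't belong to any local interface, e.g. when it's the device's address rather than the home server's. Omitting the host binds to all local interfaces and sidesteps the problem, as in this sketch:

var net = require('net');

var server = net.createServer(function(sock) {
  sock.setEncoding('utf8');
  sock.on('data', function(data) {
    // Forward the JSON command to Pusher (or anywhere else) from here.
    console.log(data);
  });
});

// No host argument: bind to 0.0.0.0, i.e. every local interface.
// The device then needs to be configured to send to this machine's IP.
server.listen(8000, function() {
  console.log('Server listening on port 8000');
});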
Is there any way to check if a TCP socket in Node.js has been connected,
as opposed to still being in a transient connecting state?
I'm looking for a property as opposed to an emitted event.
Is it even possible?
The manual shows that an event is emitted when the connection is established, and that is how you should handle it the Node.js way (using non-blocking, asynchronous I/O).
There is, though, an undocumented way to check it: the socket object has a _connecting boolean property that, if set to false, means it has either connected already or failed.
Example:
var net = require('net');

// Connect to localhost:80
var socket = net.createConnection(80);

if (socket._connecting === true) {
  console.log("The TCP connection is not established yet");
}
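For completeness, the documented, event-driven equivalent (worth preferring, since _connecting is a private field that may change between Node versions):

var net = require('net');
var socket = net.createConnection(80, 'localhost');

socket.on('connect', function() {
  // From this point on the handshake has completed.
  console.log('The TCP connection is established');
});

socket.on('error', function(err) {
  // Fired instead of 'connect' if the attempt fails.
  console.log('Connection failed:', err.message);
});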
I'm testing communication between two Node.js instances over TCP, using the net module.
Since TCP has no notion of messages (socket.write() just pushes bytes onto a stream), I'm wrapping each message in a string like msg "{ json: 'encoded' }"; in order to handle them individually (otherwise I'd receive chunks with an arbitrary number of concatenated messages).
I'm running two Node.js instances (server and client) on a CentOS 6.5 VirtualBox VM with bridged networking and a Core i3-based host machine. The test consists of the client emitting requests to the server and waiting for the responses:
Client connects to the server.
Client outputs current timestamp (Date.now()).
Client emits n requests.
Server replies to n requests.
Client increments a counter on every response.
When finished, client outputs the current timestamp.
The code is quite simple:
Server
var net = require('net');

var server = net.createServer(function(socket) {
  socket.setNoDelay(true);
  socket.on('data', function(packet) {
    // Split packet in messages.
    var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
    for (var i in messages) {
      // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
      //var message = messages[i].match(/"(.*)"/)[1];
      // Emit response:
      socket.write('msg "PONG";');
    }
  });
});

server.listen(9999);
Client
var net = require('net');

var WSClient = new net.Socket();
WSClient.setNoDelay(true);

WSClient.connect(9999, 'localhost', function() {
  var req = 0;
  var res = 0;
  console.log('Start:', Date.now());
  WSClient.on('data', function(packet) {
    var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
    for (var i in messages) {
      // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
      //var message = messages[i].match(/"(.*)"/)[1];
      res++;
      if (res === 1000) console.log('End:', Date.now());
    }
  });
  // Emit requests (exactly 1000, to match the counter check above):
  for (req = 0; req < 1000; req++) WSClient.write('msg "PING";');
});
My results are:
With 1 request: 9 - 24 ms
With 1000 requests: 478 - 512 ms
With 10000 requests: 5021 - 5246 ms
My pings (ICMP) to localhost are between 0.1 and 0.6 ms. There is no intense network traffic or CPU usage (the machine runs SSH, FTP, Apache, Memcached, and Redis).
Is this normal for Node.js and TCP, or is it just my CentOS VM or my low-performance host? Should I move to another platform like Java or a native C/C++ server?
I think that a 15 ms delay (average) per request on localhost is not acceptable for my project.
Wrapping the messages in some text and searching for a Regex match isn't enough.
The net.Server and net.Socket interfaces have a raw TCP stream as an underlying data source. The data event will fire whenever the underlying TCP stream has data available.
The problem is, you don't control the TCP stack. The timing of its data events has nothing to do with the logic of your code. So you have no guarantee that the chunk driving your listener contains exactly one message: it may hold part of a message, several messages, or several messages plus a partial one. In fact, you can pretty much guarantee that the underlying TCP stack WILL break up your data into chunks, and the listener only fires when a chunk is available. Your current code has no shared state between data events.
You only mention latency, but I expect if you check, you will also find that the count of messages received (on both ends) is not what you expect. That's because any partial messages that make it across will be lost completely. If the TCP stream sends half a message at the end of chunk 1, and the remainder in chunk 2, the split message will be totally dropped.
The easy and robust way is to use a messaging protocol like ØMQ. You will need to use it on both endpoints. It takes care of framing the TCP stream into atomic messages.
If for some reason you will be connecting to or receiving traffic from external sources, they will probably use something like a length header. Then what you want to do is create a Transform stream that buffers incoming traffic and only emits data once the amount identified in the header has arrived, as sketched below.
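A sketch of that approach, assuming a simple 4-byte big-endian length prefix in front of each message (the framing format and the FrameParser name are illustrative, not from any library):

var stream = require('stream');
var util = require('util');

// Length-prefixed framer: every message on the wire is preceded by a
// 4-byte big-endian length header. A sketch, not hardened code (no
// maximum-length check, no handling of malformed headers).
function FrameParser() {
  // readableObjectMode preserves one-push-per-message boundaries.
  stream.Transform.call(this, { readableObjectMode: true });
  this._buffer = Buffer.alloc(0);
}
util.inherits(FrameParser, stream.Transform);

FrameParser.prototype._transform = function(chunk, encoding, callback) {
  this._buffer = Buffer.concat([this._buffer, chunk]);
  // Emit every complete message; keep any partial tail for the next chunk.
  while (this._buffer.length >= 4) {
    var length = this._buffer.readUInt32BE(0);
    if (this._buffer.length < 4 + length) break; // message not complete yet
    this.push(this._buffer.slice(4, 4 + length));
    this._buffer = this._buffer.slice(4 + length);
  }
  callback();
};

// Usage: socket.pipe(new FrameParser()).on('data', function(msg) {
//   // msg is one complete message Buffer, regardless of TCP chunking.
// });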
Have you done a network dump? You may be creating network congestion due to the overhead introduced by enabling the 'no delay' socket option. This option sends data down to the TCP stack as soon as possible, and with very small chunks of information it leads to many TCP packets with tiny payloads, decreasing transmission efficiency and eventually making TCP pause the transmission due to congestion. If you want to use 'no delay' for your sockets, try increasing your receiving socket buffer so that data is pulled from the TCP stack faster. Let us know if that helped.
I am currently experimenting with WebSockets.
By reviewing some active projects/implementations like einaros/ws (and others as well), I found out that they implement the server themselves instead of using the Node net module, which provides a TCP server. Is there a reason for this approach?
https://github.com/einaros/ws/blob/master/lib/WebSocketServer.js
Regards
Update:
var server = net.createServer(function(c) {
  c.on('data', function(data) {
    // data is a WebSocket fragment which has to be parsed
  });

  // transformToSingleUtfFragment builds a valid WebSocket byte
  // fragment with 'hello' as the application payload, and sets the
  // right flags so the receiver knows it is a single text fragment.
  c.write(transformToSingleUtfFragment('hello'));
  c.pipe(c);
});

server.listen(8124, function() { // 'listening' listener
  console.log('server bound');
});
WebSocket is a protocol layered on top of normal HTTP.
How it works, basically, is that the browser sends an HTTP Upgrade request and then makes use of the HTTP/1.1 keep-alive functionality to keep the underlying TCP socket of the HTTP connection open.
The data is then sent via the WebSocket protocol (a rather large RFC behind the link), which is itself built on top of TCP.
Since the HTTP part is required, and you need to re-use the TCP connection from it, it makes sense to go with the normal HTTP server instead of net.Server. Otherwise you'd have to implement the HTTP handling part yourself.
Implementing the WebSocket protocol needs to be done in either case, and since any HTTP connection can be upgraded, you can, in theory, simply connect your WebSocket "server" to the normal HTTP server on port 80 and thus handle both normal HTTP requests and WebSockets on the same port, as the sketch below illustrates.
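A sketch of that wiring with Node's built-in http module and its 'upgrade' event; the 101 handshake below follows RFC 6455, while the actual frame parsing (the hard part that libraries like ws implement) is omitted:

var http = require('http');
var crypto = require('crypto');

var server = http.createServer(function(req, res) {
  // Ordinary HTTP requests keep working on the same port.
  res.end('hello over plain HTTP\n');
});

// Fired for requests that ask to upgrade the connection.
server.on('upgrade', function(req, socket, head) {
  // Compute the Sec-WebSocket-Accept value mandated by RFC 6455.
  var accept = crypto.createHash('sha1')
    .update(req.headers['sec-websocket-key'] +
            '258EAFA5-E914-47DA-95CA-C5AB0DC85B11')
    .digest('base64');

  socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
               'Upgrade: websocket\r\n' +
               'Connection: Upgrade\r\n' +
               'Sec-WebSocket-Accept: ' + accept + '\r\n\r\n');

  // From here on, raw WebSocket frames flow over `socket`;
  // parsing and writing them is what a WebSocket library handles.
});

server.listen(8124);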