Updating variable in nodejs - node.js

I wrote a simple server in node js.
var tls = require("tls"), fs = require("fs"), sys = require("sys");
// Number of messages received
var received = 0;
var options = {
    key: fs.readFileSync("certs/keys/server.key"),   // Server private key
    cert: fs.readFileSync("certs/certs/server.crt"), // Server certificate
    requestCert: true,                                // Require the client to send its certificate
    rejectUnauthorized: true,
    ca: fs.readFileSync("certs/certs/userA.crt")      // Root certificate
};
//Server instance with connection request callback
var server = tls.createServer(options, function (socket) {
    // Add a listener for receiving data packets
    socket.addListener("data", function (data) {
        received++;
    });
}).listen(2195, function () {
    console.log("Server started");
});
I also have a Java client application which makes multiple (300) connections to the server and sends messages. The problem is that the value of the variable "received" does not match the value of "send" on the Java side. For example, if I send 100,000 messages from the Java application, the server shows the value of received as 80,000, even though all the messages are successfully received by the server.
I think the issue is that received is updated by multiple callbacks at the same time and hence the final value is getting messed up. Any idea on how I can get this resolved?

TCP does not guarantee that the number of writes on the sending side matches the number of "data" events on the receiving side: it is a byte stream, not a message protocol. Two or more consecutively sent messages can be combined into one chunk (see Nagle's algorithm), or a single message can be split across chunks (see IP fragmentation) if it doesn't fit into the MTU. Counting "data" events therefore won't give you the message count; you have to delimit and count the messages yourself.
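A minimal sketch of counting delimited messages instead of "data" events, assuming the Java client terminates every message with a newline (that delimiter is an assumption, not something stated in the question):

var leftover = "";
socket.addListener("data", function (data) {
    // Prepend whatever was left over from the previous chunk, then split on the delimiter.
    var parts = (leftover + data.toString("utf-8")).split("\n");
    // The last element is either "" or an incomplete message; keep it for the next chunk.
    leftover = parts.pop();
    // Every complete line is exactly one message, regardless of how TCP chunked the stream.
    received += parts.length;
});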

Related

How to prevent socket io client (browser) from being overflowed with huge payload coming from server?

I have a React.js client app and a Node.js server app; the Node.js app receives JSON data in real time via socket.io from another microservice. The JSON data is sent very often and this breaks the client app. For example:
I stop the server but the client still receives data
If I try refreshing the browser, it takes a lot of time to refresh
It also used to disconnect and reconnect the sockets (I fixed this by increasing the pingTimeout but that did not solve the other problems)
I also tried increasing maxHttpBufferSize and updateTimeout, but that does not really help. Decreasing maxHttpBufferSize stops the messages from being received, but I want them to be received, just in a manner that does not break my client application.
Any advice on what I can do to improve my situation?
EDIT:
It could also work if I do not send all messages but skip every second one or so, but I am not sure how to achieve this.
Backpressure can be implemented with acknowledgements:
the client notifies the server when it has successfully handled the packet
socket.on("my-event", (data, cb) => {
// do something with the data
// then call the callback
cb();
});
the server must wait for the acknowledgement before sending more packets
io.on("connection", (socket) => {
socket.emit("my-event", data, () => {
// the client has acknowledged the packet, we can continue
});
})
Reference: https://socket.io/docs/v4/emitting-events/#acknowledgements
Note: using volatile packets won't work here, because the amount of data buffered on the server is not taken into account
Reference: https://github.com/websockets/ws/blob/master/doc/ws.md#websocketbufferedamount
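To get the throttling asked about in the edit, the acknowledgement can also drive a simple send queue on the server: the next packet only goes out once the previous one was acknowledged. A minimal sketch, assuming a queue array that the microservice feed pushes into (the queue and the sendNext helper are illustrative names, not socket.io API):

function sendNext(socket, queue) {
    if (queue.length === 0) return;
    var data = queue.shift();
    socket.emit("my-event", data, () => {
        // The client acknowledged this packet, so it is safe to send the next one.
        sendNext(socket, queue);
    });
}

If the queue grows faster than the client acknowledges, entries can be dropped or sampled before calling sendNext, which gives the "skip some messages" behaviour mentioned in the edit.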

Two UDP bind operations to the same port succeed although expected to fail

I'm listening to UDP messages by binding to a specific port and address using node.js:
var dgram = require('dgram');
var socket = dgram.createSocket('udp4');
var messageNumber = 0;
socket.on('message', function (data) {
    console.log('MESSAGE ' + (++messageNumber));
});
socket.bind(4353, '127.0.0.1');
The above code is able to receive messages, as I checked by the following sender code:
var dgram = require('dgram');
var socket = dgram.createSocket('udp4');
var address = '127.0.0.1';
setInterval(function () {
    socket.send(new Buffer('hello'), 0, 5, 4353, address);
}, 1000);
Now I'm starting another receiver while the first one is still running (using the same code above). I would expect one of the following:
The second bind should fail (most probably).
The second bind should succeed and both will receive the messages.
According to the following discussion, the first option is the correct one, assuming Node.js does not use SO_REUSEPORT by default:
Let two UDP-servers listen on the same port?
However, what happens is that it doesn't fail, and only the last receiver gets the sender's messages. If I close the last receiver, the first one starts getting the messages again.
What's going on here?
EDIT: It seems that there are differences between machines. On an Ubuntu machine I got the phenomenon described above. On a Windows machine, the FIRST receiver gets the sender's messages (and again, when closing that receiver the other one starts receiving the messages). What is common to both cases is that only one receiver gets the messages and the other one has no indication that the port is in use (neither an exception nor an error; I also tried registering for the "error" event).
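One related note: Node's dgram API exposes address reuse explicitly; dgram.createSocket() accepts a reuseAddr option that sets SO_REUSEADDR before binding. A minimal sketch of a receiver that opts in deliberately (even then, which socket actually gets a given unicast datagram remains OS-dependent, as the experiment above shows):

var dgram = require('dgram');
var socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
var messageNumber = 0;
socket.on('message', function (data) {
    console.log('MESSAGE ' + (++messageNumber));
});
socket.bind(4353, '127.0.0.1');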

Verify that the client has received the message from the server

In a chat service that I'm building, I need to send messages directly from the server.
I have found no solution; there is an example in the documentation:
// SERVER
io.sockets.on('connection', function (socket) {
    socket.on('ferret', function (name, fn) {
        fn('woot');
    });
});
// CLIENT
socket.on('connect', function () { // TIP: you can avoid listening on `connect` and listen on events directly too!
    socket.emit('ferret', 'tobi', function (data) {
        console.log(data); // data will be 'woot'
    });
});
but does the opposite of what I need!
I need to do the emit from the server and receive a confirmation of receipt from the client!
Is there a way to do this?
There is no guarantee, as the connection can be killed before the server's message reaches the client. Thus there is also no event on the server like "clientGotMessage". If a message MUST reach the user, there is no other way than to tell the server that you received the message on the client.
You can do this 'easily' by sending a number down. Client and server both keep track of that number: each time the server sends, it counts up; each time the client receives, it counts up. When the client sends something, it includes the number, so the server can see whether the client has everything. If the client missed a message, the next message will carry a number the client won't accept, and it can request the lost message from the server.
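For what it's worth, in recent socket.io versions acknowledgements also work in the server-to-client direction, which is exactly the reverse of the documentation example above. A minimal sketch, assuming a hypothetical 'chat-message' event name:

// SERVER: the last argument is a callback that runs when the client acknowledges
io.sockets.on('connection', function (socket) {
    socket.emit('chat-message', 'hello', function () {
        console.log('client confirmed receipt');
    });
});
// CLIENT: the extra argument is the acknowledgement function supplied by socket.io
socket.on('chat-message', function (data, ack) {
    console.log(data);
    ack(); // tell the server the message arrived
});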

NodeJS and TCP performance

I'm testing communication between two NodeJS instances over TCP, using the net module.
Since TCP has no notion of messages (just socket.write()), I'm wrapping each message in a string like msg "{ json: 'encoded' }"; in order to handle them individually (otherwise I'd receive packets with a random number of concatenated messages).
I'm running two NodeJS instances (server and client) on a CentOS 6.5 VirtualBox VM with bridged network and a Core i3-based host machine. The test consists of the client emitting a request to the server and waiting for the response:
Client connects to the server.
Client outputs current timestamp (Date.now()).
Client emits n requests.
Server replies to n requests.
Client increments a counter on every response.
When finished, client outputs the current timestamp.
The code is quite simple:
Server
var net = require('net');
var server = net.createServer(function (socket) {
    socket.setNoDelay(true);
    socket.on('data', function (packet) {
        // Split packet into messages.
        var messages = packet.toString('utf-8').match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            // Emit response:
            socket.write('msg "PONG";');
        }
    });
});
server.listen(9999);
Client
var net = require('net');
var WSClient = new net.Socket();
WSClient.setNoDelay(true);
WSClient.connect(9999, 'localhost', function () {
    var req = 0;
    var res = 0;
    console.log('Start:', Date.now());
    WSClient.on('data', function (packet) {
        var messages = packet.toString("utf-8").match(/msg "[^"]+";/gm);
        for (var i in messages) {
            // Get message content (msg "{ content: 'json' }";). Actually useless for the test.
            //var message = messages[i].match(/"(.*)"/)[1];
            res++;
            if (res === 1000) console.log('End:', Date.now());
        }
    });
    // Emit requests:
    for (req = 0; req <= 1000; req++) WSClient.write('msg "PING";');
});
My results are:
With 1 request: 9 - 24 ms
With 1000 requests: 478 - 512 ms
With 10000 requests: 5021 - 5246 ms
My pings (ICMP) to localhost are between 0.1 and 0.6 ms. There is no intense network traffic or CPU usage (the VM is also running SSH, FTP, Apache, Memcached, and Redis).
Is this normal for NodeJS and TCP, or is it just my CentOS VM or my low-performance host? Should I move to another platform like Java or a native C/C++ server?
I think that a 15 ms delay (average) per request on localhost is not acceptable for my project.
Wrapping the messages in some text and searching for a Regex match isn't enough.
The net.Server and net.Socket interfaces have a raw TCP stream as an underlying data source. The data event will fire whenever the underlying TCP stream has data available.
The problem is, you don't control the TCP stack. The timing of it firing data events has nothing to do with the logic of your code. So you have no guarantee that the data event that drives your listeners has exactly one, less than one, more than one, or any number and some remainder, of messages being sent. In fact, you can pretty much guarantee that the underlying TCP stack WILL break up your data into chunks. And the listener only fires when a chunk is available. Your current code has no shared state between data events.
You only mention latency, but I expect if you check, you will also find that the count of messages received (on both ends) is not what you expect. That's because any partial messages that make it across will be lost completely. If the TCP stream sends half a message at the end of chunk 1, and the remainder in chunk 2, the split message will be totally dropped.
The easy and robust way is to use a messaging protocol like ØMQ. You will need to use it on both endpoints. It takes care of framing the TCP stream into atomic messages.
If for some reason you will be connecting to or receiving traffic from external sources, they will probably use something like a length header. Then what you want to do is create a Transform stream that buffers incoming traffic and only emits data when the amount identified in the header has arrived.
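A minimal sketch of that idea, assuming each message is prefixed with a 4-byte big-endian length header (the framing convention itself is an assumption, not part of the original code):

var Transform = require('stream').Transform;
var util = require('util');

function FrameDecoder() {
    // Object mode on the readable side so that every push comes out as its own 'data' event.
    Transform.call(this, { readableObjectMode: true });
    this.buffer = Buffer.alloc(0);
}
util.inherits(FrameDecoder, Transform);

FrameDecoder.prototype._transform = function (chunk, encoding, callback) {
    // Accumulate whatever the TCP stack delivered, then peel off complete frames.
    this.buffer = Buffer.concat([this.buffer, chunk]);
    while (this.buffer.length >= 4) {
        var length = this.buffer.readUInt32BE(0);
        if (this.buffer.length < 4 + length) break;   // wait for the rest of the frame
        this.push(this.buffer.slice(4, 4 + length));  // exactly one complete message
        this.buffer = this.buffer.slice(4 + length);
    }
    callback();
};

// Usage: socket.pipe(new FrameDecoder()).on('data', function (msg) { /* one whole message */ });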
Have you done a network dump? You may be creating network congestion due to the overhead introduced by enabling the 'no delay' socket property. This property sends data down to the TCP stack as soon as possible, and if you have very small chunks of information it leads to many TCP packets with small payloads, thus decreasing transmission efficiency and eventually causing TCP to pause transmission due to congestion. If you want to use 'no delay' for your sockets, try increasing your receiving socket buffer so that data is pulled from the TCP stack faster. Let us know if that helped.

How to send Websocket messages with the least amount of latency?

I'm working on a Websocket server programmed in Node.js and I'm planning to send out a multicast message to upwards of 20,000 users. The message will be sent out on a regular interval of a second, however I am worried about the performance from the Node.js server.
I understand that Node.js works asynchronously and creates and destroys threads as it requires, but I am unsure of its efficiency. Ideally I would like to send out the message with an average latency of 5ms.
Currently I'm sending out messages to all users by running through a for loop over all the connected clients, as follows:
function StartBroadcastMessage()
{
    console.log("broadcasting socket data to client...!");
    for (var i = 0; i < clientsWithEvents.length; i++) { // Runs through all the clients
        var client = clientsWithEvents[i];
        if (client.eventid.toLowerCase() == serverEventName.toLowerCase()) // Checks whether the client and server event names are the same
            client.connection.sendUTF(GetEventSocketFeed(client.eventid)); // Sends out event data to that particular client
    }
    timeoutId = setTimeout(StartBroadcastMessage, 2*1000);
}
Is this an efficient way of sending out a multicast message with a low latency, or is there a better way?
Also, is there an efficient way to perform a load test on the server simulating a number of devices connected to the Websocket server? (So far I have found this Node app https://github.com/qarea/websockets-stress-test)
You can use socket.io to broadcast messages.
var io = require('socket.io').listen(80);
io.sockets.on('connection', function (socket) {
    socket.broadcast.emit('user connected');
});
This avoids the latency of iterating over all socket objects and formatting the message for each individual client yourself.
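As a variant, if not every connected client should receive the same payload, socket.io rooms let you keep a single emit per event id while socket.io handles the fan-out. A minimal sketch, reusing serverEventName and GetEventSocketFeed from the question (how clients subscribe, and the 'subscribe' and 'event-feed' names, are assumptions):

io.sockets.on('connection', function (socket) {
    // Each client joins a room named after the event it wants to follow.
    socket.on('subscribe', function (eventId) {
        socket.join(eventId.toLowerCase());
    });
});

setInterval(function () {
    // One emit per event id; socket.io sends it to every socket in that room.
    io.to(serverEventName.toLowerCase()).emit('event-feed', GetEventSocketFeed(serverEventName));
}, 2*1000);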
