Socket.io loses data on server side - node.js

Hello,
so, I'm making a multiplayer online game on Node (for funzies) and I've been stuck on a problem for over a week now. Perhaps the solution is simple, but I'm oblivious to it.
Long story short:
Data gets sent from client to server; this emit happens every 16.66ms.
The server receives it correctly and we collect all the data (lots of fireballs in this case), saving it in the player.skills_to_execute array.
Every 5 seconds, we copy the data to a separate array (player_information), because we are going to clean the current one so it can keep collecting new data, and then we send all the collected data back to the client.
The problem is definitely on the server side. Sometimes this works, and sometimes it doesn't.
player_information is the array that I'm sending back to the front end, but before I send it, I check with console.log on the server whether it actually contains the data, and it does! But somehow that data gets deleted/overwritten right before sending, and an empty array is sent (I check on the front end and I receive an empty array).
The actual code is more complex, but I've minimized it here so it's easier to understand.
This code lives on the client side and works as it should:
// front.js
socket.on("update-player-information", function(player_data_from_server){
console.log( player_data_from_server.skills_to_execute );
});
socket.emit("update-player-information", {
skills_to_execute: "fireball"
});
This code lives on the server side and works as it should:
// server.js
socket.on("update-player-information", function(data){
// only update if there are actually skills received
// we dont want every request here to overwrite actual array with empty [] // data.skills_to_execute = this will usually be 1 to few skills that are in need to be executed on a single client cycle
// naturally, we receive multiple requests in these 5 seconds,
// so we save them all in player object, where it has an array for this
if ( data.skills_to_execute.length > 0 ) {
player.skills_to_execute.push( data.skills_to_execute );
}
});
Now this is the code where shit hits the fan.
// server.js
// Update player information
setInterval(function(){
    // for every cycle, reset the bulk data that we are going to send, just to be safe
    var player_information = [];

    // collect the data from the player
    player_information.push({
        skills_to_execute: player.skills_to_execute
    });

    // we reset the collected actions here, because they are now going to be sent to front.js
    // and we want to keep collecting the new skills_to_execute that come in
    player.skills_to_execute = [];

    socket.emit("update-player-information", player_information);
}, 5000);
Does anybody have any ideas?

Copy the array by value instead of by reference.
Try this:
player_information.push({
    skills_to_execute: player.skills_to_execute.slice()
});
Read more about copying arrays in JavaScript by value or by reference here: Copying array by value in JavaScript
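For illustration, here is the by-reference vs. by-value difference in isolation (hypothetical variables, not the question's code). Note that it is in-place mutation of a shared array that leaks through a reference; .slice() gives each consumer its own shallow copy:

var skills = ["fireball", "fireball"];

var byReference = { skills_to_execute: skills };         // shares the same array object
var byValue     = { skills_to_execute: skills.slice() }; // independent shallow copy

skills.length = 0; // empty the original array in place

console.log(byReference.skills_to_execute); // [] - emptied along with the original
console.log(byValue.skills_to_execute);     // ["fireball", "fireball"] - unaffected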

Related

NodeJS can't get memcached values under high load

I have an application on NodeJS that uses Cluster, WS, and a memcached client to manage two memcached servers.
During normal times, it works like a charm.
But under high load, my application stops working and no longer fetches data from the memcached servers.
That is, when the load is high, the logs inside the client.get callback do not run and are not written to the console, so the client does not receive its cached value (although it is present on the memcached server, and sometimes it works fine even under high load). For a while the application looks dead, doing nothing.
getValue = function(key, callback){
    console.log(`Calculated server to choose: ${strategy(key, client.servers.length)}`) // works under high load
    console.log(`Try to get from cache by key: ${key}.`); // works under high load
    client.get( key, function(err, data) {
        const isError = err || !data // doesn't run under high load
        console.log('Data from cache is: ', data) // this callback is never executed
        if (!isError) {
            console.log(`Found data in cache key-value: ${key} - ${data}`);
        } else {
            console.log(`Not found value from cache by key: ${key}`);
        }
        const parsedData = isError ? null : JSON.parse(data.toString())
        callback(isError, parsedData); // and this won't run either
    });
}
And after some time, the socket connection is simply closed (with code 1000, no errors; it looks like the user just left):
INFO [ProcessID-100930] Connection close [772003], type [ws], code [1000], message []
Then, after 5-10 seconds, all processes start working again as if nothing had happened, and the memcached client callback starts to execute correctly.
I've been trying for a long time to catch this moment and understand why it happens, but I still don't understand. I have already switched memcached clients several times (memjs now; previously memcached and mc) but still get the same behavior under high load.
When receiving data from the memcached server, the callback simply does not run, and the data from memcached is not returned (although judging by the memcached logs, it was there at that moment).
Can someone please suggest what might be going on?
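One way to catch the moment, as a diagnostic sketch rather than a fix (reusing the getValue/client.get shape from above): wrap the call in a watchdog timer so a stalled callback becomes visible in the logs. If the event loop itself is blocked, the timer also fires late, but the logged delay then shows how long the loop was stalled.

getValue = function (key, callback) {
    const started = Date.now();
    // Watchdog: logs if the get callback has not run within 5 seconds.
    const watchdog = setTimeout(function () {
        console.warn(`client.get('${key}') still has no answer after ${Date.now() - started}ms`);
    }, 5000);
    client.get(key, function (err, data) {
        clearTimeout(watchdog);
        console.log(`client.get('${key}') answered after ${Date.now() - started}ms`);
        const isError = err || !data;
        const parsedData = isError ? null : JSON.parse(data.toString());
        callback(isError, parsedData);
    });
};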

web3js subscribe logs too fast for JavaScript to handle

I am using web3js to subscribe to logs, listening for swap events. The problem is that .on('data') delivers data so fast that my JavaScript cannot keep up. Say I add a variable let count = 0; and increase it with ++count each time I get a new log; sometimes the logs come so fast that I get the same number twice.
The real problem is that I need them in the exact order they come in; that's why I give a number to each log, but that does not work.
How would I make sure that the data items I get from the log events stay in order?
I tried to create a promise sequence
let sequence = Promise.resolve();
let count = 0;

web3.eth.subscribe('logs', {
    fromBlock: block,
    topics: [
        [swapEvent]
    ]
}).on('data', (logData)=>{
    sequence = sequence.then(()=>{
        ++count
        processData(logData)
    })
});

function processData(logData){
    return new Promise(resolve=>{
        // do some stuff
        resolve();
    })
};
In a simple test with a loop and random resolve times this works fine, but in the actual code with the socket it does not keep the order.
Does anyone have an idea how I can keep the socket data in order and process it one by one?
Not sure why, but my problem got solved with this:
sequence = sequence.then(() => processData(logData))
Before, it was:
sequence = sequence.then(() => {
    processData(logData)
})
Now it's doing everything in sequence.
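The likely reason: an arrow function with a braced body and no return statement returns undefined, so the original .then() callback resolved immediately instead of waiting for processData. With the expression body the promise is implicitly returned, so each step waits for the previous one to settle. A minimal illustration (doAsync is hypothetical):

const doAsync = () => new Promise(resolve => setTimeout(resolve, 100));
let chain = Promise.resolve();

// Braced body without "return": the callback returns undefined,
// so the chain moves on without waiting for doAsync() to finish.
chain = chain.then(() => { doAsync(); });

// Expression body: the promise is implicitly returned,
// so the chain waits for doAsync() to settle before continuing.
chain = chain.then(() => doAsync());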

SocketIO limit requests on progress watching

I'm using SocketIO for a small app where users receive updates whenever a change occurs. However, I'd like to extend it to show real-time progress for tasks that run on the server side.
However, if the task progress changes too quickly, this would result in tons of event emissions, and I think this could hurt the app's performance. Is there a way to limit event emits to a maximum of N per second (emitting only the last one, with the latest progress percentage)?
Yes, that can be done. It requires you to hold events for a short time to see whether more events of the same kind are coming, so you can combine them all into one.
Here's a general idea for how you could do this:
function emitMessageLast(socket, msg, data) {
    const queueTime = 500;        // wait for up to 500ms of idle time before sending the latest data
    const longestWaitTime = 2000; // wait no more than 2 seconds if data is being continuously sent

    function stopTimer() {
        if (socket._timer) {
            clearTimeout(socket._timer);
            socket._timer = null;
        }
    }

    function sendNow() {
        socket._lastMsg = msg;
        socket._lastTime = Date.now();
        return socket.emit(msg, data);
    }

    // if this is the first time we're sending this message,
    // or it's been a while since we last sent data,
    // just send the new data immediately
    if (socket._lastMsg !== msg || !socket._lastTime || Date.now() - socket._lastTime > longestWaitTime) {
        stopTimer();
        return sendNow();
    }

    // at this point, we know we're sending the same message as has recently been sent
    socket._lastMsg = msg;
    socket._lastData = data;
    stopTimer();

    // set a timer so that if no more data has arrived before the timer fires,
    // we send the last data we saved
    socket._timer = setTimeout(() => {
        socket._timer = null;
        sendNow();
    }, queueTime);
}
The general idea for this code is as follows:
When you get called with a message to send and no message of the same type has been sent recently, send this one immediately and record the time it was sent.
When you get called with a message to send and it's been more than longestWaitTime since you last sent one, send this one immediately too. This means that if the server is continuously sending data, it will wait no more than longestWaitTime before sending the latest value.
When the server is sending data sporadically, it will wait up to queueTime (watching for more data to come) before sending the last piece of data. It essentially buffers the last message until no more messages have been sent within queueTime, and then a timer fires off that last message.
I've configured the defaults here so that data sent to the client is delayed by at most 500ms (while waiting to see whether more data is about to arrive, so the intermediate values can be skipped), figuring that a status update every 500ms is plenty often. And if the server is continuously sending updates, it will collapse up to 2000ms of updates into the single latest one. Again, you can set these numbers however you see fit.
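For illustration, here is how the helper might be driven from a long-running server task (runTask and processItem are hypothetical; emitMessageLast is the function above). Rapid-fire progress calls get coalesced, so the client only sees the latest values:

async function runTask(socket, items) {
    for (let i = 0; i < items.length; i++) {
        await processItem(items[i]); // hypothetical per-item work
        const percent = Math.round(((i + 1) / items.length) * 100);
        // safe to call on every iteration: intermediate values are dropped
        emitMessageLast(socket, "task-progress", { percent });
    }
}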

socket.io how to send multiple messages sequentially?

I'm using socket.io like this
Client:
socket.on('response', function(i){
    console.log(i);
});

socket.emit('request', whateverdata);
Server:
socket.on('request', function(whateverdata){
    for (var i = 0; i < 10000; i++){
        console.log(i);
        socket.emit('response', i);
    }
    console.log("done!");
});
I need output like this when putting the two terminals side by side:
Server      Client
0           0
1           1
.   (etc)   .
.           .
9998        9998
9999        9999
done!
But instead I am getting this:
Server      Client
0
1
.   (etc)
.
9998
9999
done!
            0
            1
            .
            .   (etc)
            9998
            9999
Why?
Shouldn't Socket.IO / Node emit the message immediately, not wait for the loop to complete before emitting any of them?
Notes:
The for loop is very long and computationally slow.
This question is referring to the socket.io library, not websockets in general.
Due to latency, waiting for confirmation from the client before sending each response is not possible.
The order in which the messages are received is not important, only that they are received as quickly as possible.
The server emits them all in a loop, and it takes a small bit of time for them to get to the client and be processed by the client in another process. This should not be surprising.
It is also possible that the single-threaded nature of JavaScript in node.js prevents the emits from actually being sent until your JavaScript loop finishes. Knowing for sure would take a detailed examination of the socket.io code. As I said before, if you want 1,1 then 2,2 then 3,3 instead of 1,2,3 sent, then 1,2,3 received, you have to write code to force that.
If you want the client to receive the first message before the server sends the second, then you have to make the client send a response to the first and have the server hold the second until that response arrives (see the sketch below). This is all async networking; you don't control the order of events in different processes unless you write specific code to force a particular sequence.
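A sketch of that confirm-before-sending-the-next pattern using socket.io's acknowledgement callbacks (event names reused from the question; note that the asker's own latency constraint rules this out when throughput matters):

// Server: emit the next response only after the client acknowledges the previous one.
function sendSequentially(socket, i, max) {
    if (i >= max) {
        console.log("done!");
        return;
    }
    socket.emit('response', i, function () {
        // runs when the client invokes its ack callback
        sendSequentially(socket, i + 1, max);
    });
}

socket.on('request', function (whateverdata) {
    sendSequentially(socket, 0, 10000);
});

// Client: call the ack callback to confirm receipt.
socket.on('response', function (i, ack) {
    console.log(i);
    ack();
});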
Also, how do you have client and server in the same console anyway? Unless you are writing out precise timestamps, you wouldn't be able to tell exactly what event came before the other in two separate processes.
One thing you could try is to send 10, then do a setTimeout(fn, 1) to send the next 10, and so on (see the sketch below). That would give JS a chance to breathe and perhaps process other events that are waiting, allowing the packets to get sent.
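A minimal sketch of that batching idea (sendInBatches is a hypothetical helper, not from the answer):

// Send `total` responses in batches of 10, yielding to the event loop
// between batches so buffered packets get a chance to be flushed.
function sendInBatches(socket, total, start) {
    start = start || 0;
    var end = Math.min(start + 10, total);
    for (var i = start; i < end; i++) {
        socket.emit('response', i);
    }
    if (end < total) {
        setTimeout(function () {
            sendInBatches(socket, total, end);
        }, 1);
    } else {
        console.log("done!");
    }
}

socket.on('request', function (whateverdata) {
    sendInBatches(socket, 10000);
});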
There's another networking issue too. By default, TCP tries to batch up your sends (at the lowest TCP level). Each time you send, it sets a short timer and doesn't actually send until that timer fires. If more data arrives before the timer fires, it just adds that data to the "pending" packet and sets the timer again. This is referred to as Nagle's algorithm. You can disable this "feature" on a per-socket basis with socket.setNoDelay(). You have to call that on the actual TCP socket.
I have seen some discussion that Nagle's algorithm may already be turned off for socket.io by default. Not sure yet.
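For reference, this is what disabling Nagle's algorithm looks like on a plain net TCP socket; with socket.io you would have to reach the underlying connection first, which is not shown here:

const net = require('net');

const server = net.createServer((socket) => {
    // Disable Nagle's algorithm on this connection: small writes are
    // sent immediately instead of being batched by a short TCP timer.
    socket.setNoDelay(true);
});

server.listen(3000);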
In stepping through socket.io's .emit() in the debugger, there are some cases where the socket is marked as not yet writable. In those cases, the packets are added to a buffer and will be processed "later" on some future tick of the event loop. I cannot see exactly what puts the socket temporarily into this state, but I've definitely seen it happen in the debugger. When it's in that state, a tight loop of .emit() calls will just buffer and won't send until you let other events in the event loop be processed. This is why doing a setTimeout(fn, 0) every so often between sends lets the prior packets get processed: there's some other event that needs to run before socket.io makes the socket writable again.
The issue occurs in the flush() method in engine.io (the transport layer for socket.io). Here's the code for .flush():
Socket.prototype.flush = function () {
    if ('closed' !== this.readyState &&
        this.transport.writable &&
        this.writeBuffer.length) {
        debug('flushing buffer to transport');
        this.emit('flush', this.writeBuffer);
        this.server.emit('flush', this, this.writeBuffer);
        var wbuf = this.writeBuffer;
        this.writeBuffer = [];
        if (!this.transport.supportsFraming) {
            this.sentCallbackFn.push(this.packetsFn);
        } else {
            this.sentCallbackFn.push.apply(this.sentCallbackFn, this.packetsFn);
        }
        this.packetsFn = [];
        this.transport.send(wbuf);
        this.emit('drain');
        this.server.emit('drain', this);
    }
};
What happens sometimes is that this.transport.writable is false. And, when that happens, it does not send the data yet. It will be sent on some future tick of the event loop.
From what I can tell, it looks like the issue may be here in the WebSocket code:
WebSocket.prototype.send = function (packets) {
    var self = this;
    for (var i = 0; i < packets.length; i++) {
        var packet = packets[i];
        parser.encodePacket(packet, self.supportsBinary, send);
    }

    function send (data) {
        debug('writing "%s"', data);
        // always creates a new object since ws modifies it
        var opts = {};
        if (packet.options) {
            opts.compress = packet.options.compress;
        }
        if (self.perMessageDeflate) {
            var len = 'string' === typeof data ? Buffer.byteLength(data) : data.length;
            if (len < self.perMessageDeflate.threshold) {
                opts.compress = false;
            }
        }
        self.writable = false;
        self.socket.send(data, opts, onEnd);
    }

    function onEnd (err) {
        if (err) return self.onError('write error', err.stack);
        self.writable = true;
        self.emit('drain');
    }
};
Here you can see that the .writable property is set to false when data is sent, and it stays false until confirmation arrives that the data has been written. So, when rapidly sending data in a loop, the event that signals the data was successfully sent may not get a chance to come through. When you do a setTimeout() to let other things in the event loop get processed, that confirmation event arrives, the .writable property gets set back to true, and data can again be sent immediately.
To be honest, socket.io is built of so many abstraction layers across dozens of modules that it's very difficult code to debug or analyze on GitHub, so it's hard to be sure of the exact explanation. I did definitely see the .writable flag set to false in the debugger, which did cause a delay, so this seems like a plausible explanation to me. I hope this helps.

node - send large JSON over net socket

The problem is that sending large serialized JSON (over 16,000 characters) over a net socket gets it split into chunks. Each chunk fires the data event on the receiving end, so simply running JSON.parse() on the incoming data may fail with SyntaxError: Unexpected end of input.
The workaround I've come up with so far is to append a null character ('\u0000') to the end of the serialized JSON and check for it on the receiving end. Here is an example:
var partialData = '';
client.on( 'data', function( data ) {
    data = data.toString();
    if ( data.charCodeAt( data.length - 1 ) !== 0 ) {
        partialData += data;
        // if data is incomplete then no need to proceed
        return;
    } else {
        // append all but the null character to the existing partial data
        partialData += data.substr( 0, data.length - 1 );
    }
    // pass parsed data to some function for processing
    workWithData( JSON.parse( partialData ));
    // reset partialData for next data transfer
    partialData = '';
});
One weakness of this model is when the receiver is connected to multiple sockets and each socket is sending large JSON payloads.
The reason I'm doing this is that I need to pass data between two processes running on the same box, and I prefer not to use a port, hence the use of a net socket. So there are two questions: first, is there a better way to quickly pass large JSON data between two Node.js processes? Second, if this is the best way, how can I better handle the serialized JSON being split into chunks when sent?
You can use try...catch on each arrival to see whether you have valid JSON yet. Not very good performance, though.
You can calculate the size of your JSON on the sending side and send it before the JSON itself.
You can append a boundary string that's unlikely to appear in the JSON. Your \u0000 - yes, it seems to be a legit choice, but the most popular choice is a newline (see the sketch after this answer).
You can use external libraries like dnode, which should already do something like what I mentioned above. I'd recommend trying that. Really.
One weakness of this model is when the receiver is connected to multiple sockets and each socket is sending large JSON payloads.
Use different buffers for every socket. No problem here.
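As a concrete sketch of the newline-delimited option (reusing the client and workWithData names from the question; socket and message on the sending side are hypothetical). JSON.stringify never emits literal newlines, so '\n' is a safe boundary:

// Sender: terminate every serialized message with a newline.
socket.write(JSON.stringify(message) + '\n');

// Receiver: buffer chunks and parse each complete line.
var partialData = '';
client.on('data', function (chunk) {
    partialData += chunk.toString();
    var lines = partialData.split('\n');
    partialData = lines.pop(); // keep the trailing incomplete piece (or '')
    lines.forEach(function (line) {
        if (line.length > 0) {
            workWithData(JSON.parse(line));
        }
    });
});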
It is possible to identify each socket individually and build a buffer for each one. I add an id to each socket when I receive a connection, and then when I receive data I add that data to the socket's buffer:
var buffers = {};

net.createServer( function(socket) {
    // There are many ways to assign an id; this is just an example.
    socket.id = Math.random() * 1000;
    buffers[socket.id] = '';
    socket.on('data', function(data) {
        // 'this' refers to the socket calling this callback.
        buffers[this.id] += data;
    });
});
Each time data arrives, you can check whether you have received the delimiter that tells you a buffer holds a complete message and is ready to be used.
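Combining the per-socket buffers with a delimiter check, a minimal sketch (handleMessage is hypothetical; the listen path shows a Unix domain socket, which avoids using a port as the asker wanted):

var net = require('net');
var buffers = {};

net.createServer(function (socket) {
    socket.id = Math.random() * 1000;
    buffers[socket.id] = '';
    socket.on('data', function (data) {
        buffers[this.id] += data.toString();
        // Drain every complete, newline-terminated message for this socket.
        var parts = buffers[this.id].split('\n');
        buffers[this.id] = parts.pop(); // keep the incomplete remainder
        parts.forEach(function (part) {
            if (part.length > 0) {
                handleMessage(JSON.parse(part)); // hypothetical handler
            }
        });
    });
}).listen('/tmp/app.sock');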
