Using callbacks with Socket IO - node.js

I'm using Node and Socket.IO to stream a Twitter feed to the browser, but the stream is too fast. In order to slow it down, I'm attempting to use setInterval, but it either only delays the start of the stream (without setting evenly spaced intervals between the tweets) or says that I can't use callbacks when broadcasting. Server-side code below:
function start(){
    stream.on('tweet', function(tweet){
        if(tweet.coordinates && tweet.coordinates != null){
            io.sockets.emit('stream', tweet);
        }
    });
}

io.sockets.on("connection", function(socket){
    console.log('connected');
    setInterval(start, 4000);
});

I think you're misunderstanding how .on() works for streams. It's an event handler. Once it is installed, it's there and the stream can call you at any time. Your interval is actually just making things worse because it's installing multiple .on() handlers.
It's unclear what you mean by "data coming too fast". Too fast for what? If it's just faster than you want to display it, then you can just store the tweets in an array and then use timers to decide when to display things from the array.
If data from a stream is coming too quickly to even store and this is a flowing nodejs stream, then you can pause the stream with the .pause() method and then, when you're able to go again, you can call .resume(). See http://nodejs.org/api/stream.html#stream_readable_pause for more info.
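If you go the queue-and-timer route, a minimal sketch might look like the following (untested; it assumes the io and stream objects from the question and a 4-second display interval):

var tweetQueue = [];

// Install the stream handler once; just buffer tweets as they arrive.
stream.on('tweet', function(tweet){
    if(tweet.coordinates != null){
        tweetQueue.push(tweet);
    }
});

// Every 4 seconds, send at most one queued tweet to the clients.
setInterval(function(){
    var next = tweetQueue.shift();
    if(next){
        io.sockets.emit('stream', next);
    }
}, 4000);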

Related

Does write() (without callback) preserve order in node.js write streams?

I have a node.js program in which I use a stream to write information to an SFTP server. Something like this (simplified version):
var conn = new SSHClient();
process.nextTick(function (){
    conn.on('ready', function () {
        conn.sftp(function (error, sftp) {
            var writeStream = sftp.createWriteStream(filename);
            ...
            writeStream.write(line1);
            writeStream.write(line2);
            writeStream.write(line3);
            ...
        });
    }).connect(...);
});
Note I'm not using the (optional) callback argument (described in the write() API specification) and I'm not sure if this may cause undesired behaviour (i.e. lines not written in the order line1, line2, line3). In other words, I don't know if this alternative (more complex code, and I'm not sure whether it's less efficient) should be used:
writeStream.write(line1, ..., function() {
    writeStream.write(line2, ..., function() {
        writeStream.write(line3);
    });
});
(or equivalent alternative using async series())
Empirically, in my tests I have always gotten the file written in the desired order (I mean, first line1, then line2 and finally line3). However, I don't know if this has happened just by chance or if the above is the right way of using write().
I understand that writing to a stream is in general asynchronous (as all I/O work should be), but I wonder if streams in node.js keep an internal buffer or something similar that keeps data ordered, so that each write() call doesn't return until the data has been put in this buffer.
Examples of usage of write() in real programs are very welcome. Thanks!
Does write() (without callback) preserve order in node.js write streams?
Yes it does. It preserves order of your writes to that specific stream. All data you're writing goes through the stream buffer which serializes it.
but I wonder if streams in node.js keep an internal buffer or similar that keeps data ordered, so each write() call doesn't return until the data has been put in this buffer.
Yes, all data does go through a stream buffer. The .write() operation does not return until the data has been successfully copied into the buffer unless an error occurs.
Note, that if you are writing any significant amount of data, you may have to pay attention to flow control (often called back pressure) on the stream. It can back up and may tell you that you need to wait before writing more, but it does buffer your writes in the order you send them.
If the .write() operation returns false, then the stream is telling you that you need to wait for the drain event before writing any more. You can read about this issue in the node.js docs for .write() and in this article about backpressure.
Your code also needs to listen for the error event to detect any errors upon writing the stream. Because the writes are asynchronous, they may occur at some later time and are not necessarily reflected in either the return value from .write() or in the err parameter to the .write() callback. You have to listen for the error event to make sure you see errors on the stream.
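As a hedged illustration of the flow-control point above (a sketch, not code from the question; it reuses the writeStream and line1..line3 variables, and writeLines is a made-up helper), writing a batch of lines while respecting the return value of .write() might look like this:

writeStream.on('error', function (err) {
    // Errors surface asynchronously, so always listen for them.
    console.error('stream error:', err);
});

function writeLines(stream, lines, done) {
    var i = 0;
    (function writeNext() {
        while (i < lines.length) {
            // write() returns false when the internal buffer is full.
            if (!stream.write(lines[i++])) {
                // Wait for 'drain' before writing more.
                stream.once('drain', writeNext);
                return;
            }
        }
        done();
    })();
}

writeLines(writeStream, [line1, line2, line3], function () {
    writeStream.end();
});

The order of the writes is still preserved; the drain handling only decides when the next write is issued.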

RxJS: Converting Node.js sockets into Observables and merge them into one stream

I'm trying to convert Node's sockets into streams using RxJS. The goal is to have each socket create its own stream and have all streams merge into one. As new sockets connect, a stream is created with socketStream = Rx.Observable.fromEvent(socket, 'message').
Then the stream is merged into a master stream with something like
mainStream = mainStream.merge(socketStream)
This appears to work fine, the problem is that after 200-250 client connections, the server throws RangeError: Maximum call stack size exceeded.
I have sample server and client code that demonstrates this behavior on a gist here:
Sample Server and Client
I suspect that as clients connect/disconnect, the main stream doesn't get cleaned up properly.
The problem is that you are merging your Observables recursively. Every time you do
cmdStream = cmdStream.merge(socketStream);
You are creating a new MergeObservable/MergeObserver pair.
Taking a look at the source, you can see that what you are basically doing with each subscription is subscribing to each of your previous streams in sequence, so it shouldn't be hard to see that at around 250 connections your call stack is probably at least 1000 calls deep.
A better way to approach this would be to use the flatMap operator and think of your connections as creating an Observable of Observables.
//Turn the connections themselves into an Observable
var connections = Rx.Observable.fromEvent(server, 'connection',
    socket => new JsonSocket(socket));

connections
    //flatten the messages into their own Observable
    .flatMap(socket => {
        return Rx.Observable.fromEvent(socket.__socket, 'message')
            //Handle the socket closing as well
            .takeUntil(Rx.Observable.fromEvent(socket.__socket, 'close'));
    }, (socket, msg) => {
        //Transform each message to include the socket as well.
        return { socket : socket.__socket, data : msg };
    })
    .subscribe(processData, handleError);
I haven't tested the above, but it should fix your stack overflow error.
I would probably also question the overall design of this. What exactly are you gaining by merging all the Observables together? You are still differentiating them by passing the socket object along with the message, so it would seem these could all be distinct streams.
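For what it's worth, a minimal sketch of that distinct-streams alternative (untested, assuming the same server, JsonSocket, processData and handleError as above) would simply subscribe per connection instead of merging:

Rx.Observable.fromEvent(server, 'connection', socket => new JsonSocket(socket))
    .subscribe(socket => {
        // One message stream per connection, completed when the socket closes.
        Rx.Observable.fromEvent(socket.__socket, 'message')
            .takeUntil(Rx.Observable.fromEvent(socket.__socket, 'close'))
            .subscribe(
                msg => processData({ socket: socket.__socket, data: msg }),
                handleError
            );
    });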

Real time multiplayer game using Node.JS, MongoDB and Socket.IO : Game loop and async calls

I'm currently building a HTML5 game using Node.JS, MongoDB and Socket.IO. The purpose of this project is not really creating a finished, fully playable game but rather understanding and implementing some basic concepts of multiplayer game programming and getting used to MongoDB.
Here is the basic server architecture I went with. The server listens to clients through Socket.IO and, every time a message is received, it pushes it to a queue. A message is received whenever a player wants to move, or otherwise alter the game in some way. I'm staying vague because the details of the game don't really matter here. So the server receives messages from all the clients and keeps them in memory for a certain time.
Every 50 ms the server sequentially processes all the messages in the queue, advances the game state, broadcasts the changes to clients, and then empties the queue and starts listening to clients again.
I'm having some difficulty building this game loop, as I'm not really sure what MongoDB does and whether it does it in time, since all the calls are purely asynchronous. Let's say the following code is in my game loop; here are my concerns:
for (var i=0; i<queue.length; i++) {
    if(queue[i].message.type === "move") {
        //The server has first to ensure that the player can effectively move,
        //thus making a query to MongoDB. Once the result has been retrieved,
        //the server makes an update in MongoDB with the new player position
    }
    //At this point, due to the async nature of MongoDB,
    //I cannot ensure that the queries have been executed nor that the message was effectively handled
}
//Here again, I cannot ensure that the game state gracefully advanced
//and then I cannot broadcast it.
I think the game loop has to be sequential, but I'm not sure if it's possible to do so using MongoDB and I'm not sure MongoDB is the right tool for that job.
I'm using the official Node.JS driver for MongoDB as I'm more interested in nested documents than in Object Data Modeling.
Do you have any clues on building a sequential game loop in this case? Or am I using MongoDB outside its intended purpose?
Seems fairly straightforward.
The solution is not to use a for loop, as you only want to start the next message being processed after the previous one has completed. For this, it is probably easier to use a library like async and the eachSeries function.
https://github.com/caolan/async#each
async.eachSeries(queue, processMessage, allDone);

function processMessage(message, callback) {
    // do stuff with message; don't forget to call callback when you have
    // completed all async calls and are done processing the message!
    // eg
    if(message.type === "move") {
        mongo.insert({player: message.player, x: message.x, y: message.y}, function (err, res) {
            // error check etc
            callback();
        });
    } else {
        // always call the callback, even for messages you don't handle,
        // otherwise the series stalls
        callback();
    }
}

function allDone() {
    // called when all messages have been processed!
    // setTimeout(processNewQueue, 50);
}
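To tie that into the 50 ms loop from the question, a rough sketch could look like the following (untested; processNewQueue, incoming and currentGameState are placeholder names, and io is the Socket.IO server from the question):

var incoming = [];   // messages pushed here by the Socket.IO handlers

function processNewQueue() {
    // take a snapshot of the queue and empty it for the next tick
    var queue = incoming;
    incoming = [];

    async.eachSeries(queue, processMessage, function (err) {
        if (err) { console.error(err); }
        // broadcast the new game state only after every message is handled
        io.sockets.emit('state', currentGameState);
        // schedule the next tick only once this one has fully completed
        setTimeout(processNewQueue, 50);
    });
}

processNewQueue();

Scheduling the next tick from the completion callback (rather than with setInterval) guarantees the ticks never overlap, even if a batch of MongoDB calls takes longer than 50 ms.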

Does the new way to read streams in Node cause blocking?

The documentation for node suggests that the new best way to read streams is as follows:
var readable = getReadableStreamSomehow();
readable.on('readable', function() {
    var chunk;
    while (null !== (chunk = readable.read())) {
        console.log('got %d bytes of data', chunk.length);
    }
});
To me this seems to cause a blocking while loop. This would mean that if node is responding to an http request by reading and sending a file, the process would have to block while the chunk is read before it could be sent.
Isn't this blocking IO which node.js tries to avoid?
The important thing to note here is that it's not blocking in the sense that it's waiting for more input to arrive on the stream. It's simply retrieving the current contents of the stream's internal buffer. This kind of loop will finish pretty quickly since there is no waiting on I/O at all.
A stream can be either synchronous or asynchronous. If a readable stream synchronously pushes data into the internal buffer, then you get a synchronous stream. And yes, in that case, if it pushes lots of data synchronously, node's event loop won't be able to run until all the data is pushed.
Interestingly, even if you remove the while loop in the readable callback, the stream module internally runs a similar loop once and keeps going until all the pushed data is read.
But asynchronous I/O operations (e.g. the http or fs module) push data into the buffer asynchronously. So the while loop only runs when data has been pushed into the buffer and stops as soon as you've read the entire buffer.
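To make the synchronous case concrete, here is a small hedged sketch (not from the original answer): a toy Readable whose _read() pushes everything synchronously, read with the same 'readable' pattern. The loop just drains the internal buffer; with an fs or http stream the chunks would instead arrive asynchronously and the handler would fire repeatedly as they do.

var Readable = require('stream').Readable;

var syncStream = new Readable();
var count = 0;

syncStream._read = function () {
    // Pushes synchronously: no waiting on I/O at all.
    if (count < 5) {
        this.push('chunk ' + (count++) + '\n');
    } else {
        this.push(null); // signal end of stream
    }
};

syncStream.on('readable', function () {
    var chunk;
    // This loop only drains what is already buffered; it never blocks on I/O.
    while (null !== (chunk = syncStream.read())) {
        console.log('got %d bytes of data', chunk.length);
    }
});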

net module in node.js

I'm trying to make a server based on the net module. What I don't understand is which event I'm supposed to put the response code on:
on('data', function()) could still be in the middle of receiving more data from the stream (so it might be too early to reply),
and on('end', function()) fires after the connection is closed.
Thank you for your help.
The socket's 'data' event calls the callback function every time an incoming chunk of data is ready for reading, and it emits the data from the socket's buffer.
So use this:
socket.on('data', function(data){
    // Here is where you detect the real data (complete messages) in the stream
});
This can help, for node v0.6.5: http://nodejs.org/docs/v0.6.5/api/net.html#event_data_
and this, for a clearer understanding of readable streams:
http://nodejs.org/docs/v0.6.5/api/streams.html#readable_Stream
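Since 'data' can fire with partial messages, one common approach (a sketch under the assumption that messages are newline-delimited; handleMessage is a placeholder) is to buffer chunks and only reply once a complete message has arrived:

var net = require('net');

var server = net.createServer(function (socket) {
    var buffered = '';

    socket.on('data', function (data) {
        buffered += data.toString();

        // A complete message is everything up to a newline; anything after
        // the last newline is kept for the next 'data' event.
        var lines = buffered.split('\n');
        buffered = lines.pop();

        lines.forEach(function (line) {
            handleMessage(socket, line); // placeholder: reply here
        });
    });

    socket.on('end', function () {
        // connection closed by the client; too late to reply
    });
});

server.listen(8000);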
