How to customize socket.io's connection event by room? - node.js

When a socket connects to a socket.io server, a connection event is triggered. The function that handles that event can be overridden to add custom behavior to the connection.
I want to add a function to handle the connection event, but for a socket.io room.
I was trying something like (with socket.io 0.9.14):
io.sockets.in('someNamespace/someRoom').on('connection', function () {
  //custom actions here
});
But it doesn't seem to work. The behavior is attached to the default connection event and triggered on every connection.
Is it possible to achieve what I'm trying? Any suggestions?
Thanks in advance.

If I've understood your question correctly, you're trying to use namespaces. In that case you should do it like this:
io.of('/someNamespace/someRoom').on('connection', function (socket) {
  //your actions here
});
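For reference, a client would then connect to that namespace roughly like this (a sketch, assuming the socket.io 0.9 client, which takes the namespace from the URL path; the host is just a placeholder):
// client side (socket.io 0.9.x)
var socket = io.connect('http://localhost/someNamespace/someRoom');
socket.on('connect', function () {
  //the namespace's connection handler on the server has run by now
});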

It is possible to simulate similar behavior (an on-connection handler) for rooms by using the callback (or fn) argument of socket.io 0.9.14's join function, like this:
socket.join('some/room', function handleJoin () {
  //do something when the socket joins 'some/room'
});
In socket.io 0.9.14, when the join function is given a second argument, a warning is displayed saying:
Client#join callback is deprecated
But in the current master branch (version 1.0.0-pre), the warning is no longer there and the callback function can be used for error handling when joining a room. So it should be reasonably safe to rely on the existence of this argument in the future.
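For illustration, error handling with that callback in the 1.x style might look like this (a sketch; the exact callback signature could still change before the final release):
socket.join('some/room', function (err) {
  if (err) {
    //the room could not be joined; handle the error here
    return;
  }
  //the socket is now in 'some/room'
});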
Finally, by passing a wrapper function to the join call, it is possible to preserve the enclosing closure and thus provide the handler with the same kind of arguments that an on-connection handler normally receives.
e.g. (somewhere inside the server):
var someVar = 'some value';
socket.join('some/room', function () {
  handleJoin(someVar);
});
And then, someVar will be available inside the handleJoin function (someVar could be the room name, for example). Of course, this is a very simple example and it could be adapted to meet different requirements or to comply with the upcoming version of the join function.
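As a sketch of that idea (handleJoin and its parameters are purely illustrative), the wrapper can forward the socket and the room name, so the handler looks much like a per-room connection handler:
function handleJoin(socket, room) {
  //acts like a "connection" handler scoped to one room
  socket.emit('welcome', 'You joined ' + room);
}

var room = 'some/room';
socket.join(room, function () {
  handleJoin(socket, room);
});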
Hope this helps.

Related

how to remove listener from inside the callback function in node.js

I set up a listener for an event emitter, and what I want to do is remove that same listener if I get certain events. The problem I am running into is that I don't know how to pass the callback function to removeListener from inside the callback function itself. I tried "this", but it errors out. Is there any way to achieve this? By the way, I am not using once because I only want to remove the listener on a certain event.
P.S. I am using redis here, so whatever message I receive, I am always listening on the "message" event. It would not be possible to just listen on different keys, and channels wouldn't help either, because I only want to remove one specific listener.
Also, what I want to do is communicate between two completely independent processes. No hierarchy of any kind. In process B, there are many independent functions that will get data from process A. My initial thought was to use a message queue, but with that I cannot think of a way to ensure that each function in B will get the right data from A.
One handy thing about function expressions is that you can give them a name, and that name can be used inside the function itself. I haven't tested this, but you should try:
object.on('event', function handler() {
  // do stuff
  object.off('event', handler);
});
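With Node's built-in EventEmitter (which has removeListener rather than off), the same named-function-expression trick applies. A minimal sketch, assuming node_redis and a subscriber client named sub (the channel name and the stop condition are just examples):
var redis = require('redis');
var sub = redis.createClient();

sub.subscribe('some-channel');
sub.on('message', function onMessage(channel, message) {
  if (message === 'stop') {
    // removes only this listener; other 'message' listeners keep working
    sub.removeListener('message', onMessage);
    return;
  }
  // handle the message here
});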
You should also look into whether your event emitter supports namespaces. That would let you do something like:
object.on('event.namespace', function() {
  // do stuff
  object.off('.namespace');
});

node.js - ensuring one event occurs before another

I'm new to node.js and learning...
I have the following 2 socket.io listeners and I need to ensure the .on happens before the .once (currently the .once occurs first):
io.on('connection', function (socket) {
io.once('connection', function (socket) {
Is there a good way to ensure the .once always occurs first?
This may not work because you are creating two independent functions. Usually that is fine for event listeners, but in this case you need them coupled.
What you can do is subscribe using on and have the handler figure out if it has run before or not.
io.on('connection', function (socket) {
  if (!this.called) {
    // do what you want to do only the first time
    this.called = true;
  }
  // do whatever you want to do every time
}.bind({ called: false }));
This creates a wrapper function which keeps track of whether or not it has been called and does something special when it is called for the first time.
So when you have these order dependencies, it's best to code for them explicitly. Events are great when operations are truly independent, but when operations have interdependencies, better to make that obvious in code. In general, don't assume anything about the order in which event sources will invoke the bound event handler functions. Therefore, try something like this pseudocode:
io.on('connection', firstConnection);

function firstConnection(socket) {
  //do first connectiony stuff here
  io.on('connection', allConnections);
  io.off('connection', firstConnection);
  allConnections(socket);
}

function allConnections(socket) { /* all connectiony stuff here */ }

How to ensure connection.end() is called when obtaining a database connection from a pool

We are using node-mysql, and I'm exposing mysql's createPool so that, from any file, we can require the database module and obtain a connection like this:
var db = require("./database");
db(function(err, connection) {
  //use connection to do stuff
});
I noticed that people don't always remember to call connection.end(), which would return the connection to the pool, and then we hit the connection limit.
How can I design the acquisition of the connection so that connection.end() is called no matter how or when the callback function terminates? I can't figure out a way to do this in a single place so that developers only have to worry about getting the connection.
I don't know the createPool from mysql, but can't you just wrap it?
People will provide a function stuff(err, connection) { ... do whatever they want ... }.
Why don't you take that function and create a save_stuff function? Something like:
function save_stuff_creator(stuff) {
  return function(err, connection) {
    stuff(err, connection);
    connection.end();
  };
}
You may want some try..catch around the stuff() call.
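A sketch of how the database module itself could apply such a wrapper, so that connection.end() is always called even if the callback throws (the pool options are placeholders, and this only covers callbacks that finish their work synchronously):
// database.js (sketch)
var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' });

module.exports = function getConnection(stuff) {
  pool.getConnection(function (err, connection) {
    if (err) return stuff(err);
    try {
      stuff(null, connection);
    } finally {
      connection.end(); // always hand the connection back, even on a throw
    }
  });
};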
If you want the connection to stay around for some other callbacks, you could change the signature to something like stuff(connection, callback) and use a module like async and a series object.
But I have no idea how to enforce a final call to connection.end() if you want to wait for the end of "the user thread": that is actually the issue. There is no such thread, and there is no way of figuring out which event in the event loop comes from whom. As far as I know, the event loop is not exposed to the JavaScript developer.
If you can't trust your callee, maybe you can put its code in a separate child process (see the cluster module). That gives you a bit of control: when the child finishes, you get notified.

How to remove different listeners on the same object that are also listening to the same event?

Does an event and a listener on a certain object act as an "identifying pair" for that listener? Or just the event on the object?
reading over node.js documentation here:
http://nodejs.org/api/events.html#events_emitter_removelistener_event_listener
For example, if you have two callback functions listener_1 and listener_2:
var stdin = process.stdin;
stdin.on('data',listener_1);
stdin.on('data',listener_2);
then you remove the listener, with:
stdin.removeListener('data',listener_1);
So, is listener_2 still listening?
Thank you.
P.S. I tried testing it myself using util.inspect and the listeners method, but I'm still not confident I understood how it works!
If you want to remove all the listeners, you can use
stdin.removeAllListeners('data')
Otherwise, after calling
stdin.removeListener('data',listener_1);
listener_2 is still listening.
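A quick way to verify this yourself, as a standalone sketch using a plain EventEmitter instead of process.stdin:
var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

function listener_1(data) { console.log('listener_1:', data); }
function listener_2(data) { console.log('listener_2:', data); }

emitter.on('data', listener_1);
emitter.on('data', listener_2);

emitter.removeListener('data', listener_1);

console.log(emitter.listeners('data').length); // 1
emitter.emit('data', 'hello'); // only listener_2 fires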
You can use an anonymous function but you need to save it somewhere.
var listener = function(){};
emitter.on('event', listener);
emitter.removeListener('event', listener);
But that means you can't use bind or the arrow function closure notation:
emitter.on('event', listener.bind(this));// bind creates a new function every time
emitter.removeListener('event', listener.bind(this));// so this doesn't work
emitter.on('event', ()=>{});// closure creates a new function every time
Which is annoying. This works though:
emitter.on('event', this.eventListener = () => {});
emitter.removeListener('event', this.eventListener);
So does this (storing listeners in a map):
emitter.on('event', this.listeners['event'] = this.myEventListener.bind(this));
emitter.removeListener('event', this.listeners['event']);
This is not always an issue:
In the most common case there is only one listener.
In the second most common case, there can be more than one but they all want removing together (e.g. because the emitter has finished its job).
Either way, you won't need to specify the function. However when you do, you do.

How to avoid the need to delay event emission to the next tick of the event loop?

I'm writing a Node.js application using a global event emitter. In other words, my application is built entirely around events. I find this kind of architecture works extremely well for me, with the exception of one edge case, which I will describe here.
Note that I do not think knowledge of Node.js is required to answer this question. Therefore I will try to keep it abstract.
Imagine the following situation:
A global event emitter (called mediator) allows individual modules to listen for application-wide events.
An HTTP server is created, accepting incoming requests.
For each incoming request, an event emitter is created to deal with events specific to this request
An example (purely to illustrate this question) of an incoming request:
mediator.on('http.request', function (request, response, emitter) {
  //deal with the new request here, e.g.:
  response.send("Hello World.");
});
So far, so good. One can now extend this application by identifying the requested URL and emitting appropriate events:
mediator.on('http.request', function (request, response, emitter) {
  //identify the requested URL
  if (request.url === '/') {
    emitter.emit('root');
  }
  else {
    emitter.emit('404');
  }
});
Following this, one can write a module that deals with a root request:
mediator.on('http.request', function(request, response, emitter) {
  //when root is requested
  emitter.once('root', function() {
    response.send('Welcome to the frontpage.');
  });
});
Seems fine, right? Actually, it is potentially broken code. The reason is that the line emitter.emit('root') may be executed before the line emitter.once('root', ...). The result is that the listener never gets executed.
One could deal with this specific situation by delaying the emission of the root event to the end of the event loop:
mediator.on('http.request', function (request, response, emitter) {
  //identify the requested URL
  if (request.url === '/') {
    process.nextTick(function() {
      emitter.emit('root');
    });
  }
  else {
    process.nextTick(function() {
      emitter.emit('404');
    });
  }
});
The reason this works is that the emission is now delayed until the current tick of the event loop has finished, by which time all listeners have been registered.
However, there are many issues with this approach:
one of the advantages of such an event-based architecture is that emitting modules do not need to know who is listening to their events. Therefore it should not be necessary to decide whether an emission needs to be delayed, because one cannot know what is going to listen for the event and whether it needs the emission to be delayed or not.
it significantly clutters and complicates the code (compare the two examples)
it probably worsens performance
As a consequence, my question is: how does one avoid the need to delay event emission to the next tick of the event loop, such as in the described situation?
Update 19-01-2013
An example illustrating why this behavior is useful: to allow an HTTP request to be handled in parallel.
mediator.on('http.request', function(req, res) {
  req.onceall('json.parsed', 'validated', 'methodoverridden', 'authenticated', function() {
    //the request has now been validated, parsed as JSON, the kind of HTTP method has been overridden when requested to and it has been authenticated
  });
});
If each event like json.parsed were emitted along with the original request on a global emitter, the above would not be possible, because each event could relate to a different request, and you could not listen for a combination of actions executed in parallel for one specific request.
Having both a mediator which listens for events and an emitter which also listens and triggers events seems overly complicated. I'm sure there is a legit reason but my suggestion is to simplify. We use a global eventBus in our nodejs service that does something similar. For this situation, I would emit a new event.
bus.on('http:request', function(req, res) {
  if (req.url === '/')
    bus.emit('ns:root', req, res);
  else
    bus.emit('404');
});

// note the use of a namespace here to target a specific subsystem
bus.once('ns:root', function(req, res) {
  res.send('Welcome to the frontpage.');
});
It sounds like you're starting to run into some of the disadvantages of the observer pattern (as mentioned in many books/articles that describe this pattern). My solution is not ideal – assuming an ideal one exists – but:
If you can make a simplifying assumption that the event is emitted only 1 time per emitter (i.e. emitter.emit('root'); is called only once for any emitter instance), then perhaps you can write something that works like jQuery's $.ready() event.
In that case, subscribing to emitter.once('root', function() { ... }) will check whether 'root' was emitted already, and if so, will invoke the handler anyway. And if 'root' was not emitted yet, it'll defer to the normal, existing functionality.
That's all I got.
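A minimal sketch of that idea (StickyEmitter is a made-up name, not part of Node's API): once an event has been emitted, late subscribers to it are invoked immediately with the original arguments.
var EventEmitter = require('events').EventEmitter;
var util = require('util');

function StickyEmitter() {
  EventEmitter.call(this);
  this._fired = {}; // event name -> the arguments it was emitted with
}
util.inherits(StickyEmitter, EventEmitter);

StickyEmitter.prototype.emit = function (event) {
  this._fired[event] = Array.prototype.slice.call(arguments, 1);
  return EventEmitter.prototype.emit.apply(this, arguments);
};

StickyEmitter.prototype.once = function (event, listener) {
  if (this._fired.hasOwnProperty(event)) {
    // the event already happened, so run the listener right away
    listener.apply(this, this._fired[event]);
    return this;
  }
  return EventEmitter.prototype.once.call(this, event, listener);
};

// usage: the handler still runs even though it subscribed too late
var emitter = new StickyEmitter();
emitter.emit('root');
emitter.once('root', function () { console.log('root handled'); });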
I think this architecture is in trouble, as you're doing sequential work (I/O) that requires a definite order of actions, but you still plan to build the app on components that naturally allow a non-deterministic order of execution.
What you can do
Include a context selector in the mediator.on function, e.g. in this way:
mediator.on('http.request > root', function( .. ) { } )
Or define it as a submediator:
var submediator = mediator.yield('http.request > root');
submediator.on(function( ... ) {
  emitter.once('root', ... );
});
This would trigger the callback only if root was emitted from http.request handler.
Another, trickier way is to do the ordering in the background, but it's not feasible with your current one-mediator-rules-them-all interface. Implement the code so that each .emit call does not actually send the event, but puts the produced event in a list. Each .once puts a consume-event record in the same list. When all the mediator.on callbacks have been executed, walk through the list and sort it into dependency order (e.g. if the list has consume 'root' before produce 'root', swap them). Then execute the consume handlers in order. If you run out of events, stop executing.
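A simplified sketch of that buffering idea (DeferredEmitter is a made-up name, and instead of a full dependency sort this version just matches produced events to pending handlers when flush() is called after all the mediator.on callbacks have run):
function DeferredEmitter() {
  this._produced = {};  // event name -> list of argument arrays
  this._consumers = []; // { event, handler } records
}

DeferredEmitter.prototype.emit = function (event) {
  var args = Array.prototype.slice.call(arguments, 1);
  (this._produced[event] = this._produced[event] || []).push(args);
};

DeferredEmitter.prototype.once = function (event, handler) {
  this._consumers.push({ event: event, handler: handler });
};

DeferredEmitter.prototype.flush = function () {
  var self = this;
  this._consumers.forEach(function (consumer) {
    var produced = self._produced[consumer.event];
    if (produced && produced.length) {
      consumer.handler.apply(null, produced.shift());
    }
    // if the event was never produced, its handler is simply not run
  });
  this._consumers = [];
};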
Oi, this seems like a very broken architecture for a few reasons:
How do you pass around request and response? It looks like you've got global references to them.
If I answer your question, you will turn your server into a pure synchronous function and you'd lose the power of async node.js. (Requests would be queued effectively, and could only start executing once the last request is 100% finished.)
To fix this:
Pass request & response to the emit() call as parameters. Now you don't need to force everything to run synchronously anymore, because when the next component handles the event, it will have a reference to the right request & response objects.
Learn about other common solutions that don't need a global mediator. Look at the pattern that Connect was based on many Internet-years ago: http://howtonode.org/connect-it <- describes middleware/onion routing
