Node.js server without HTTP for Redis subscription

I want to develop a process that only subscribes to a Redis channel and stays alive to handle received messages.
I wrote the following code:
var redis = require("redis");
var sub = redis.createClient({host: process.env.REDIS_HOST});
console.log('subscribing...')
sub.on('subscribe', () => console.log('subscribed'));
sub.on('message', (ch, msg) => console.log(`Received message on ${ch}:${msg}`));
console.log('done')
But obviously it does not work: when launched, it runs through all the lines and exits. I think I don't need a framework like Express because my process does not use HTTP.
How can I write a server that stays alive "forever" without using an HTTP framework?

You're not subscribing to a channel:
sub.subscribe('channel');
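For completeness, a minimal sketch of the corrected subscriber (it assumes REDIS_HOST points at a reachable Redis instance, and the channel name is just an example). The open subscription keeps a pending handle on the event loop, so the process stays alive on its own:

var redis = require("redis");
var sub = redis.createClient({host: process.env.REDIS_HOST});

// Log once the server confirms the subscription.
sub.on('subscribe', function (channel, count) {
  console.log('subscribed to', channel);
});

// Handle every message published on the channel.
sub.on('message', function (channel, message) {
  console.log('Received message on ' + channel + ': ' + message);
});

// The missing piece: actually subscribe to a channel. As long as the
// subscription (and its connection) is open, Node will not exit.
sub.subscribe('channel');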

I used the exact code above and the process stays open. In the code above you aren't publishing any messages, so you will only see "subscribing..." and "done" printed to the terminal.
Also, as mentioned, you aren't subscribing to the channel either.
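To actually see a message arrive, publish one from another connection once the subscription is up, for example with redis-cli PUBLISH channel "hello", or from a second Node client. A sketch of the latter, using the same example channel name:

var redis = require("redis");
var pub = redis.createClient({host: process.env.REDIS_HOST});

// Publish a test message; the subscriber process should log it.
pub.publish('channel', 'hello from the publisher', function (err, receivers) {
  console.log('delivered to ' + receivers + ' subscriber(s)');
  pub.quit();
});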


Attaching event emitter to socket.connected

I am currently writing an Electron app, for which in the main process I want to talk to a server over websockets.
Part of my main process' code base depends on the user's socket connection status. I require the socket.io-client library
const socket = require('socket.io-client')(host);
which gives me access to the variable socket.connected, which is true or false according to whether a connection is established to the server.
Thus, what I would like is to attach an event emitter to this variable. I have made it work with polling:
var events = require('events').EventEmitter;
var event = new events();

// Emits successfully every 200ms 'Connection: true|false'
event.on('status', function(status) {
  console.log('Connection: ', status);
});

setInterval(function() {
  let status = socket.connected;
  event.emit('status', status);
}, 200);
but was wondering whether this truly is the way to implement it. To me it seems strange to have to resort to polling in an async framework like Node.js. On the other hand, I could not find another way to implement it. The best-case scenario would be to attach an event emitter directly to socket.connected somehow, but I was unable to find out how to do that. Could anybody advise me on a better way to implement this?
Thanks
You can get notified of the completion of a client connection with the connect event:
socket.on('connect', function() {
  // client socket is now connected to the server
});

socket.on('disconnect', function() {
  // client socket is now disconnected from the server
});
Documentation for the client events is here: http://socket.io/docs/client-api/#manager(url:string,-opts:object). That doc also lists other events, such as reconnect, if you want to track other things.
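Tying that back to your EventEmitter approach, you could emit the status from these events instead of polling. A minimal sketch (the host URL is just an example, and 'status' is the event name from your own code):

var EventEmitter = require('events').EventEmitter;
var event = new EventEmitter();

var host = 'http://localhost:3000'; // example URL; use your server's address
var socket = require('socket.io-client')(host);

event.on('status', function(status) {
  console.log('Connection: ', status);
});

// Emit only when the connection state actually changes,
// instead of sampling socket.connected every 200ms.
socket.on('connect', function() { event.emit('status', true); });
socket.on('disconnect', function() { event.emit('status', false); });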

How to use socket.io-redis with multiple servers?

I have the following code on two machines:
var server = require('http').createServer(app);
io = require('socket.io')(server);
var redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({host: config.redis.host, port: config.redis.port}));
server.listen(config.port, function () {
  // ...
});
and I store the socket.id of every client connected to these two machines in a central DB. The socket IDs are being saved, and sending an event on the same server works flawlessly, but when I try to send a message to a socket on the other server it doesn't work:
subSocket = io.sockets.connected[userSocketID];
subSocket.emit('hello',{a:'b'})
How can I know that Redis is working correctly?
How can I send a message to a socket connected on another server?
You can't. Socket.IO requires sticky sessions. The socket must communicate solely with the originating process.
docs
You can have the socket.io servers communicate to each other to pass events around, but the client must continue talking to the process with which it originated.
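One way to do that inter-server passing is to broadcast to a room instead of looking the socket up in io.sockets.connected. A sketch, assuming the socket.io-redis adapter from your snippet is configured on every server: in socket.io 1.x each socket automatically joins a room named after its own id, and room broadcasts are relayed through Redis to whichever server actually holds the connection. You still never get a reference to the remote socket object, only delivery of the event.

// On whichever server wants to send the message: this reaches the socket
// even if it is connected to the other machine, because the emit is
// relayed through Redis by the socket.io-redis adapter.
io.to(userSocketID).emit('hello', {a: 'b'});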
I'm having a similar issue, but I can answer your first question.
You can monitor all the commands processed by Redis with this command in the terminal:
redis-cli monitor
http://redis.io/commands/MONITOR
Unfortunately I cannot help you further, as I am still having issues even though both servers are sending something to Redis.

Socket.io 1.3.7 not cleaning up on client disconnect

I have a node.js script which allows a client to connect and receive some realtime data from an external script.
I have just upgraded node.js & socket.io to the current versions (from <0.9) and am trying to get to grips with what happens when a client quits, times out or disconnects from the server.
Here is my current node.js script:
var options = {
  allowUpgrades: true,
  pingTimeout: 50000,
  pingInterval: 25000,
  cookie: 'k1'
};
var io = require('socket.io')(8002, options);
cp = require('child_process');
var tail = cp.spawn('test-scripts/k1.rb');

//On connection do the code below//
io.on('connection', function(socket) {
  console.log('************ new client connected ****************', io.engine.clientsCount);

  //Read from mongodb//
  var connection_string = '127.0.0.1:27017/k1-test';
  var mongojs = require('mongojs');
  var db = mongojs(connection_string, ['k1']);
  var k1 = db.collection('k1');
  db.k1.find({}, {'_id': 0, "data.time": 0}).forEach(function(err, doc) {
    if (err) throw err;
    if (doc) { socket.emit('k1', doc); }
  });

  //Run Ruby script & Listen to STDOUT//
  tail.stdout.on('data', function(chunk) {
    var closer = chunk.toString();
    var sampArray = closer.split('\n');
    for (var i = 0; i < sampArray.length; i++) {
      try {
        var newObj = JSON.parse(sampArray[i]);
        // DO SOCKET //
        socket.emit('k1', newObj);
      } catch (err) {}
    }
  });

  socket.on('disconnect', function() {
    console.log('****************** user disconnected *******************', socket.id, io.engine.clientsCount);
    socket.disconnect();
  });
});
In the old version of socket.io, when a client exits I get the following logged in debug:
info - transport end (undefined)
debug - set close timeout for client Owb_B6I0ZEIXf6vOF_b-
debug - cleared close timeout for client Owb_B6I0ZEIXf6vOF_b-
debug - cleared heartbeat interval for client Owb_B6I0ZEIXf6vOF_b-
debug - discarding transport
then everything goes quiet and all is well.
With the new (1.3.7) version of socket.io, when a client exits I get the following logged in debug:
socket.io:client client close with reason transport close +2s
socket.io:socket closing socket - reason transport close +1ms
socket.io:client ignoring remove for -0BK2XTmK98svWTNAAAA +1ms
****************** user disconnected ******************* -0BK2XTmK98svWTNAAAA
note the line socket.io:client ignoring remove for -0BK2XTmK98svWTNAAAA
but after that, and with no other clients connected to the server, I'm still seeing it trying to write data to a client that already left. (In the example below, this is what I get after I've had 2 clients connected, both of which have since disconnected.)
socket.io:client ignoring packet write {"type":2,"data":["k1",{"item":"switch2","datapoint":{"type":"SWITCH","state":"0"}}],"nsp":"/"} +1ms
socket.io:client ignoring packet write {"type":2,"data":["k1",{"item":"switch2","datapoint":{"type":"SWITCH","state":"0"}}],"nsp":"/"} +3ms
I'm trying to stop this apparently new behaviour so that once a client has disconnected and the server is idle, it's not still trying to send data out.
I've been playing about with socket.disconnect and delete socket["id"] but I'm still left with the same thing.
I tried with io.close() which sort of worked - it booted any clients who were actually connected and made them re-connect, but it still left the server sitting there trying to send updates to the client that had left.
Am I missing something obvious, or has there been a change in the way this is done with the new version of socket.io? There is nothing in the migration doc about this. The only other result I found was this bug report from June 2014 which has been marked as closed. From my reading of it - it appears to be the same problem I'm having but with the current version.
Update: I've done some more testing and added io.engine.clientsCount to both instances of console.log to track what it's doing. It appears that when I connect 1 client it gives me 1 (as expected), and when I close that client it changes to 0 (as expected). This leads me to believe that the client connection has been closed and engine.io knows this. So why am I still seeing all the 'ignoring packet write' lines, and more of them with every client who has disconnected?
Update 2: I've updated the code above to include the parser section and the DB section - this represents the full node script, as there was a thought that I may need to clean up my own clients. I have tried adding the following code to the script in the hope it would help, but alas it did not :(
In the connection event I added clients[socket.id] = socket; and in the disconnect event I added delete clients[socket.id]; but it didn't change anything (that I could see)
Update 3: Answered, thanks to @robertklep. It was an 'event handler leak' that I was actually looking for. Having found that, I also found this post.
My guess is that the newer socket.io is just showing you (by way of debug messages) a situation that was already happening in the old socket.io, where it just wasn't being logged.
I think the main issue is this setup:
var tail = cp.spawn('test-scripts/k1.rb');

io.on('connection', function(socket) {
  ...
  tail.stdout.on('data', function(chunk) { ... });
  ...
});
This adds a new handler for each incoming connection. However, these won't miraculously disappear once the socket is disconnected, so they keep on trying to push new data through the socket (whether it's disconnected or not). It's basically an event handler leak, as they aren't getting cleaned up.
To clean up the handlers, you need to keep a reference to the handler function and remove it as a listener in the disconnect event handler:
var handler = function(chunk) { ... };
tail.stdout.on('data', handler);

socket.on('disconnect', function() {
  tail.stdout.removeListener('data', handler);
});
There's also a (slight) chance that you will get ignored packet writes from your MongoDB code, if the socket is closed before the forEach() has finished, but that may be acceptable (since the amount of data is finite).
PS: eventually, you should consider moving the processing code (what handler is doing) to outside the socket code, as it's now being run for each connected socket. You can create a separate event emitter instance that will emit the processed data, and subscribe to that from each new socket connection (and unsubscribe again when they disconnect), so they only have to pass the processed data to the clients.
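A sketch of that refactor, reusing the tail child process, the io server and the 'k1' event name from the question (the parsing now runs once per chunk instead of once per connected socket):

var EventEmitter = require('events').EventEmitter;
var feed = new EventEmitter();
feed.setMaxListeners(0); // allow one 'k1' listener per connected socket without warnings

// Parse the Ruby script's stdout once, outside the connection handler.
tail.stdout.on('data', function(chunk) {
  chunk.toString().split('\n').forEach(function(line) {
    try {
      feed.emit('k1', JSON.parse(line));
    } catch (err) {}
  });
});

io.on('connection', function(socket) {
  var forward = function(obj) { socket.emit('k1', obj); };
  feed.on('k1', forward);

  // Unsubscribe on disconnect so the shared emitter doesn't leak handlers.
  socket.on('disconnect', function() {
    feed.removeListener('k1', forward);
  });
});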
This is most probably because your connection is established via the polling transport, which is painful to deal with as a developer. The reason is that this transport uses a timeout to determine whether the client is still there.
The behavior you see happens because the client has left, but the moment for opening the next polling session has not come yet, so the server still thinks the client is out there.
I have tried to "fight" this problem in many ways (like adding a custom onbeforeunload handler on the client side to force a disconnect), but none of them work in 100% of cases when polling is used as the transport.
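For reference, the client-side workaround mentioned above looks roughly like this; a sketch only, since as said it does not cover 100% of cases:

// In the browser client: tell the server we're leaving before the page
// unloads, instead of waiting for the polling timeout to expire.
window.onbeforeunload = function() {
  socket.disconnect();
};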

Redis pub/sub - same process listening to one channel

I have a single Node.js server and I would like the process to listen to messages sent from itself (this is for testing only). The problem I am having is that when publishing a message to the same process, the subscriber doesn't seem to receive it at all.
I have this setup:
var redis = require('redis');
var rcPub = redis.createClient();
var rcSub = redis.createClient();
var message = String('testing123');
rcSub.subscribe('redis_channel#test_overall_health');
rcSub.on('message', function (channel, msgs) {
  console.log(channel, msgs);
});
rcPub.publish('redis_channel#test_overall_health', message);
I have one redis client that acts as a subscriber and one as a publisher, which is the way you must do it, but for some reason the messages aren't being received. Is there some limitation that a process can't listen to the messages it publishes? It doesn't seem to make sense. I can verify this code is more or less right because other processes listening to the same channel received the message.
Apparently, the SUBSCRIBE command is being sent after the PUBLISH command.
Node's Redis client queues commands until a connection is established to the Redis server and flushes the queued commands to the server when a connect event is received on the socket. The client that initiated the connection first (publisher), will most likely receive the connect event first, at which point it will send its queued commands (publish). Because Redis processes commands in a single thread, the subscriber SUBSCRIBEs only after the PUBLISH command is complete. The other processes are able to receive the messages since they've already subscribed to this channel.
Creating the subscriber client first should work in most cases, though a safer approach is to wait for the subscription to complete before publishing any messages:
var redis = require('redis');

var publisher = redis.createClient(),
    subscriber = redis.createClient(),
    message = 'testing123';

subscriber.subscribe('redis_channel#test_overall_health');

subscriber.on('message', function (channel, message) {
  console.log(channel, message);
});

subscriber.on('subscribe', function (channel, count) {
  publisher.publish('redis_channel#test_overall_health', message);
});

node.js, faye (bayeux) - after subscribe event

I have a chat server. And after the clients subscribe, I want to look in a DB to see if there is any history for the chat room they subscribed to.
The problem is that I can only catch "subscribe" events in an extension, which must do "return callback(message);" to return the message. If I do the history lookup there, nothing gets published to the clients because the client isn't actually subscribed yet.
Is there any way to know when the client is ready? Or some event that fires on successful subscription?
Thanks!
You can attach a callback after creating the subscription that will fire when you are successfully subscribed and another when you fail to subscribe:
var http = require('http');
var faye = require('faye');
var faye_server = new faye.NodeAdapter({mount: '/faye', timeout: 120});
faye_server.listen(8089);
var subscription = faye_server.getClient().subscribe('/testing', function(message){console.log(message);});
subscription.callback(function(){console.log('Subscription successful and ready to use!');});
subscription.errback(function(){console.log('ERROR: Subscription failed!');});
This is documented on the faye main page, although it's buried a bit: http://faye.jcoglan.com/browser/subscribing.html
This works on a node server, node client, or browser client as I've tested it.
Furthermore, what I have been doing to make sure my clients are up and running is this: create the client, then try to subscribe to a garbage channel name. Once that subscription comes up, fails, or times out (I put a 5-second timeout around it), I take that as my client-open success. It's a bit of a roundabout method, but it's working very well for me, and faye makes it pretty clean by using callback and errback just like in my previous example.
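A rough sketch of that readiness check, using a client from getClient() as in the earlier snippet (the probe channel name and the 5-second timeout are just placeholders):

var client = faye_server.getClient();
var probe = client.subscribe('/readiness-probe', function() {});

// Give up waiting after 5 seconds and assume the client is open anyway.
var timer = setTimeout(function() {
  console.log('Client assumed open (probe timed out)');
}, 5000);

probe.callback(function() {
  clearTimeout(timer);
  probe.cancel(); // the probe was only needed to confirm the connection
  console.log('Client open (probe subscription succeeded)');
});

probe.errback(function() {
  clearTimeout(timer);
  console.log('Client open (server answered, even though it rejected the probe)');
});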
Now that's all on the client side, but it gets much easier on the server side: http://faye.jcoglan.com/node/monitoring.html. Just use the monitoring events described there, look for subscribe events from specific clients, and you are good to go.
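For example, the server-side monitoring events look roughly like this (a sketch based on the monitoring docs linked above; faye_server is the NodeAdapter from the earlier snippet):

faye_server.on('subscribe', function(clientId, channel) {
  // The client has successfully subscribed: safe to push chat history now.
  console.log('client ' + clientId + ' subscribed to ' + channel);
});

faye_server.on('unsubscribe', function(clientId, channel) {
  console.log('client ' + clientId + ' left ' + channel);
});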
Hope that helps
