Identifying Redis Master using Sentinel from Node.js

I have 2 Redis servers, one master and the other a slave (replication). When the master goes down for some reason, the slave becomes the master, and it continues to act as the master until something goes wrong with that server.
I have a Node.js server from which I want to push data to whichever Redis is currently running as master. I have a Sentinel that monitors the Redis servers, but my question is: how do I fetch the master information from the Sentinel using Node.js?
And if there is a way, does it automatically push data to the alternative Redis server without any service restart?

ioredis supports Sentinel, like this:
var Redis = require('ioredis');

var redis = new Redis({
  sentinels: [{ host: 'localhost', port: 26379 }, { host: 'localhost', port: 26380 }],
  name: 'mymaster'
});

redis.set('foo', 'bar');
The name identifies a group of Redis instances (the master name), which you should also specify in Sentinel's sentinel.conf. You can refer to the Redis Sentinel documentation for how to configure Sentinel.
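For reference, a minimal sketch of the matching entry in sentinel.conf; the master address 127.0.0.1:6379 and the quorum of 2 are just placeholders for your own topology:

# sentinel.conf (address and quorum are placeholders)
sentinel monitor mymaster 127.0.0.1 6379 2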
About your second question, see below:
ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost.
And node-redis does not support Sentinel at the moment.
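If you prefer to fetch the master's address yourself instead of letting ioredis resolve it, a rough sketch is to open a plain connection to a Sentinel and send the SENTINEL command; the host, port and group name below are the same assumptions as in the example above:

var Redis = require('ioredis');

// connect to a Sentinel (not to the master) and ask it who the master is
var sentinel = new Redis({ host: 'localhost', port: 26379 });

sentinel.call('sentinel', 'get-master-addr-by-name', 'mymaster')
  .then(function (addr) {
    // addr is a two-element array, e.g. ['127.0.0.1', '6379']
    console.log('Current master is', addr[0] + ':' + addr[1]);
  })
  .catch(function (err) {
    console.error('Could not query Sentinel:', err);
  });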

Related

Why does connecting to a cluster constantly loop in IoRedis?

I am currently trying to connect to my Redis cluster stored on another instance from a server running my application. I am using IoRedis to interface between my application and my Redis instance and it worked fine when there was only a single Redis node running. However, after trying to setup the cluster connection in my Node application, it constantly loops on the connection. My cluster setup works correctly.
As of now, I have tried the following configuration in my application to connect to the cluster. The issue is that the 'connect' event constantly loops, printing out 'Connected to Redis!'. The 'ready' and 'error' events are never fired.
const cache: Cluster = new Cluster([
  { port: 8000, host: REDIS_HOST },
  { port: 8001, host: REDIS_HOST },
  { port: 8002, host: REDIS_HOST }
]);

cache.on('connect', () => {
  console.log('Connected to Redis!');
});
In the end, the 'connect' event should only fire once. Does anyone have any thoughts on this?
This kind of error, as I discovered today, is not related to ioredis but to the Redis instance setup. In my case the problem was with p3x-redis-ui, which uses ioredis: the cluster had not been initialized.
See https://github.com/patrikx3/redis-ui/issues/48
Maybe you'll find some clues there to help you resolve your bug.
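As a debugging aid, a rough sketch that also listens for 'ready', 'error' and 'node error', so a problem like an uninitialized cluster actually shows up in the logs; the node list is the same assumption as in the question and REDIS_HOST is a placeholder:

const { Cluster } = require('ioredis');

const REDIS_HOST = '127.0.0.1'; // placeholder
const cache = new Cluster([
  { port: 8000, host: REDIS_HOST },
  { port: 8001, host: REDIS_HOST },
  { port: 8002, host: REDIS_HOST }
]);

cache.on('ready', () => console.log('Cluster is ready'));
// connection-level failures, e.g. when the cluster slots were never assigned
cache.on('error', (err) => console.error('Cluster error:', err));
// tells you which individual node is failing
cache.on('node error', (err, address) => console.error('Node error from', address, err));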

Connection to Redis throwing connection timeout from NodeJS

I have a Redis setup in an Azure Linux VM running one master, one slave and one Sentinel in the same VM (A). When I try to connect to the Redis Sentinel from another VM (B) using redis-cli, I am able to connect and set and get values. But when I try to connect to the Redis Sentinel using the ioredis module in Node.js from VM (B), it throws a connection timeout error. I use the following code snippet to connect to the Sentinel from the Node application:
var Redis = require('ioredis');

var redis = new Redis({
  sentinels: [{ host: 'x.x.x.x', port: 26379 }],
  name: 'mymaster'
});
The confusing part is that when I run the Redis master, slave and Sentinel in the same VM (A) and use '127.0.0.1' instead of 'x.x.x.x', the code works fine.
Any help is much appreciated.
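For debugging, a rough sketch that logs ioredis' connection events, so you can at least see which address it is timing out on; it reuses the placeholder values from the snippet above:

var Redis = require('ioredis');

var redis = new Redis({
  sentinels: [{ host: 'x.x.x.x', port: 26379 }],
  name: 'mymaster'
});

redis.on('error', function (err) {
  // the error usually includes the host/port ioredis is actually trying to reach
  console.error('ioredis error:', err);
});
redis.on('reconnecting', function () {
  console.log('ioredis reconnecting...');
});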

Scaling Socket.IO across multiple servers

I've been searching around for help on setting up a multi-server cluster for a Node.js Socket.IO install. This is what I am trying to do:
Have 1 VIP in an F5 load balancer, pointing to n Node servers running Express and Socket.IO.
Have the client connect to that 1 VIP via io.connect and then have it filter down to one of the servers behind the load balancer.
When a message is emitted on any one of those servers, it is sent to all users who are listening for that event, even if they are connected via the other servers.
For example - if we have Server A, Server B and Server C behind LB1 (F5), and User A is connected to Server A, User B is connected to Server B and User C is connected to Server C.
In a "chat" scenario - basically, if a message is emitted from Server A on the message event, Servers B and C should also send the message to their connected clients. I read that this is possible using socket.io-redis, but it needs a Redis box - which server should that be installed on? If all the servers are connected to the same Redis box, does this work automatically?
var io = require('socket.io')(server);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
Any help would be greatly appreciated thanks!
The answer to this question is that you must set up a single Redis server outside your Socket.IO cluster and have all nodes connect to it.
Then you simply add this at the top of your code and it just works without any issues.
var io = require('socket.io')(server);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
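With the adapter in place, a broadcast from any one node is relayed through Redis to the clients connected to the other nodes. A rough sketch of the chat case described above; the 'chat message' event name is just an example:

io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    // io.emit goes through the Redis adapter, so users on Server A, B and C all receive it
    io.emit('chat message', msg);
  });
});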

How to use Redis as session storage for multiple AWS instances behind an ELB?

I am a beginner at using redis-server in my Node.js application. I am using Redis as a session store for my app:
var express = require('express');
var RedisStore = require('connect-redis')(express);

var admin_session = express.session({
  key: 'admin_token',
  store: new RedisStore({
    host: 'localhost',
    port: 6379,
    db: 2
    // pass: 'RedisPASS'
  }),
  secret: 'aersda##$32sfas2342'
});
This works fine for a single instance, but my question is about multiple instances served from an AWS ELB.
What I actually want is a shared Redis setup that clears the session across all instances whenever a change happens on any one of them, and also clears all the code-level caching.
If this is possible, can anybody help me with how to do it and what the steps are?
Thanks in advance,
Vijay
This is a fairly standard approach for sessions when you have multiple web servers behind a load balancer.
If you run a redis server on every web instance, then you need to enable sticky sessions on your load balancer. This will work fine until you start auto-scaling: as soon as one of your web instances is removed from the pool, anyone with a session on that instance will lose it.
So you want to run a shared redis (or memcached) server for your caching and/or sessions. With elasticache, you have the option of running a single node "cluster" and using the standard way of connecting, or running a true cluster and using the AWS library to connect to it.
For sessions, I would probably just use a single node and not bother with the elasticache client.
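Concretely, a rough sketch of the question's session setup pointed at a shared Redis instead of localhost; the hostname is a made-up placeholder for whatever single-node ElastiCache endpoint (or self-managed Redis) you run, and every instance behind the ELB must use the same one:

var express = require('express');
var RedisStore = require('connect-redis')(express);

var admin_session = express.session({
  key: 'admin_token',
  store: new RedisStore({
    host: 'my-sessions.abc123.0001.use1.cache.amazonaws.com', // placeholder endpoint
    port: 6379,
    db: 2
  }),
  secret: 'aersda##$32sfas2342'
});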
If you want to clear your caches when you do a deployment, you'll have to have a hook and write code to do this. Or you could simply spin up a new Elasticache server, update your configurations, and destroy the old one.

Load Balance: Node.js - Socket.io - Redis

I have 3 servers running Node.js, and they are tied together with Redis (1 master, 2 slaves).
The issue I'm having is that running the system on a single server works fine, but when I scale it to 3 Node.js servers, it starts missing messages and the system gets unstable.
My load balancer does not accept sticky sessions, so every time requests from a client arrive at it, they can go to a different server.
I'm pointing all the Node.js servers to the Redis master.
It looks like Socket.IO is storing information on each server and it is not being distributed with Redis.
I'm using Socket.IO 0.9, and I suspect that I don't have any handshake code; could this be the reason?
My code to configure socket.io is:
var express = require('express');
var io = require('socket.io');
var redis = require('socket.io/node_modules/redis');
var RedisStore = require('socket.io/lib/stores/redis');

var pub = redis.createClient("a port", "an ip");
var sub = redis.createClient("a port", "an ip");
var client = redis.createClient("a port", "an ip");

var events = require('./modules/eventHandler');

exports.createServer = function createServer() {
  var app = express();
  var server = app.listen(80);
  var socketIO = io.listen(server);

  socketIO.configure(function () {
    socketIO.set('store', new RedisStore({
      redisPub: pub,
      redisSub: sub,
      redisClient: client
    }));
    socketIO.set('resource', '/chat/socket.io');
    socketIO.set('log level', 0);
    socketIO.set('transports', ['htmlfile', 'xhr-polling', 'jsonp-polling']);
  });

  // attach event handlers
  events.attachHandlers(socketIO);
  // return server instance
  return server;
};
Redis only syncs from the master to the slaves. It never syncs from the slaves to the master. So, if you're writing to all 3 of your machines, then the only messages that will wind up synced across all three servers will be the ones hitting the master. This is why it looks like you're missing messages.
More info here.
Read only slave

Since Redis 2.6 slaves support a read-only mode that is enabled by default. This behavior is controlled by the slave-read-only option in the redis.conf file, and can be enabled and disabled at runtime using CONFIG SET.

Read-only slaves will reject all write commands, so that it is not possible to write to a slave by mistake. This does not mean that the feature is meant to expose a slave instance to the internet, or more generally to a network where untrusted clients exist, because administrative commands like DEBUG or CONFIG are still enabled. However, the security of read-only instances can be improved by disabling commands in redis.conf using the rename-command directive.

You may wonder why it is possible to revert the default and have slave instances that can be the target of write operations. The reason is that, while these writes will be discarded if the slave and the master resynchronize or if the slave is restarted, there is often ephemeral, unimportant data that can be stored on slaves. For instance, clients may store information about the reachability of the master in the slave instance to coordinate a failover strategy.
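In practice that means each of the three Node servers should create its pub, sub and client connections against the master's address rather than a local slave. A rough sketch, with MASTER_PORT and MASTER_HOST as placeholders for the master's real address:

var redis = require('socket.io/node_modules/redis');

// all three connections point at the master, never at a slave
var pub = redis.createClient(MASTER_PORT, MASTER_HOST);
var sub = redis.createClient(MASTER_PORT, MASTER_HOST);
var client = redis.createClient(MASTER_PORT, MASTER_HOST);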
I came across this post:
It can be a good idea to have a "proxy" between the Node.js servers and the load balancer.
With this approach, XHR-polling can be used behind load balancers without sticky sessions.
Load balancing with node.js using http-proxy
Using node-http-proxy I can have a custom routing rule, e.g. by adding a parameter to the "connect URL" of Socket.IO.
Has anyone tried this solution before?
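I have not tried it in production, but a minimal sketch of that idea with node-http-proxy could look roughly like this; the backend addresses and the routing rule are placeholders:

var http = require('http');
var httpProxy = require('http-proxy');

// placeholder backends running the Socket.IO servers
var targets = ['http://10.0.0.1:80', 'http://10.0.0.2:80', 'http://10.0.0.3:80'];
var proxy = httpProxy.createProxyServer({ ws: true });

function pickTarget(req) {
  // placeholder routing rule, e.g. inspect a parameter added to the Socket.IO connect URL
  return targets[0];
}

var server = http.createServer(function (req, res) {
  proxy.web(req, res, { target: pickTarget(req) });
});

// forward WebSocket upgrades as well
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: pickTarget(req) });
});

server.listen(8080);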
