I'm confused as to how to connect to AWS's ElastiCache Redis from Node.js. I've successfully managed to connect to the primary host (001) via the node_redis npm package, but I'm unable to use the clustering ability of ioredis because apparently ElastiCache doesn't implement the CLUSTER commands.
I figured that there must be another way, but the AWS SDK for Node only has commands for managing ElastiCache, not for actually connecting to it.
Without using CLUSTER, I'm concerned that my app won't be able to fail over if the master node fails, since I can't fall back to the other nodes. I also get errors from my Redis client when the master switches, Error: READONLY You can't write against a read only slave., which I'm not sure how to handle gracefully.
Am I overthinking this? I am finding very little information about using ElastiCache Redis clusters with Node.js.
I was overthinking this.
Q: What options does Amazon ElastiCache for Redis provide for node failures?
Amazon ElastiCache for Redis will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time.

If you have a replication group with one or more read replicas and Multi-AZ is enabled, then in case of primary node failure ElastiCache will automatically detect the failure, select a replica and promote it to become the new primary. It will also propagate the DNS so that you can continue to use the primary endpoint and after the promotion it will point to the newly promoted primary. For more details see the Multi-AZ section of this FAQ.

When the Redis replication option is selected with Multi-AZ disabled, in case of primary node failure you will be given the option to initiate a failover to a read replica node. The failover target can be in the same zone or another zone. To failback to the original zone, promote the read replica in the original zone to be the primary.

You may choose to architect your application to force the Redis client library to reconnect to the repaired Redis server node. This can help as some Redis libraries will stop using a server indefinitely when they encounter communication errors or timeouts.
The solution is to connect only to the primary node, without using any clustering on the client side. When the master fails, a replica is promoted and the DNS is updated so that the promoted replica becomes the primary node, without the host needing to change on the client's side.
To prevent temporary connectivity errors when the failover happens, you can add some configuration to ioredis:
var Redis = require('ioredis');

// port, host and log are assumed to be defined elsewhere in the application
var client = new Redis(port, host, {
  retryStrategy: function (times) {
    log.warn('Lost Redis connection, reattempting');
    // Delay before the next reconnection attempt, in milliseconds
    return Math.min(times * 2, 2000);
  },
  reconnectOnError: function (err) {
    var targetError = 'READONLY';
    if (err.message.slice(0, targetError.length) === targetError) {
      // When a slave is promoted, we might get temporary errors saying
      // READONLY You can't write against a read only slave. Attempt to
      // reconnect if this happens.
      log.warn('ElastiCache returned a READONLY error, reconnecting');
      return 2; // `1` means reconnect, `2` means reconnect and resend
                // the failed command
    }
  }
});
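Returning 2 rather than 1 from reconnectOnError makes ioredis resend the command that triggered the READONLY error after reconnecting, so the failed write is retried once the primary endpoint's DNS points at the newly promoted node.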
Related
My NodeJS app works with a MongoDB replica set. I want the client to read from a secondary, so I set readPreference=secondary, but if the secondary is down, the NodeJS app cannot read from Mongo. With the option secondaryPreferred, if no secondary is available, NodeJS can read from the primary instance. But if no primary is available and only a secondary is available, I cannot start the NodeJS app. It throws error failed to connect to server [xxxx] on first connect [Error: connect ECONNREFUSED xxx.xx.xx.xx:27017
How can I configure a mix between secondary and secondaryPreferred? I expect my NodeJS app to start even if only one instance is available, no matter whether it's a primary or a secondary. While the app is running, if one Mongo instance goes down, it should automatically read from the other instance.
"if have no primary available" is not normal state of the replica. Let the election to happen, then start your app.
Primary is a mandatory member of a replica set. All writes are going to primary. When you connect to the replica-set the driver should know where to write to, even if you don't intend to write on application level.
Once connected, your application can survive temporary loss of primary and read from secondaries.
As a side note - consider adding an arbiter. 2 nodes replica set is a recipe for disaster.
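A minimal sketch of the connection side of this, assuming the legacy Node.js driver; the hostnames, database name and replica set name rs0 are placeholders:

var MongoClient = require('mongodb').MongoClient;

// Seed list of all members plus the replica set name. With
// readPreference=secondaryPreferred, reads are served by a secondary
// when one is available and fall back to the primary otherwise.
var uri = 'mongodb://host1:27017,host2:27017,host3:27017/mydb' +
          '?replicaSet=rs0&readPreference=secondaryPreferred';

MongoClient.connect(uri, function (err, db) {
  if (err) throw err;
  // Connected: reads prefer secondaries, writes always go to the primary
});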
I have set up Redis on three separate instances and configured them so that one instance is the master and two are replicas of the master. I have used sentinels to ensure high availability of the setup. I have a Node.js application which needs to use Redis. How do I achieve read/write splitting in my application, given that if my Redis master goes down, one of my read replicas becomes the master and writes need to go to it?
As far as I know, ioredis is the only Node.js Redis client that supports sentinels.
"ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost."
According to this answer from an Azure Redis Cache team member, the Azure Redis Cache exposes a single endpoint. That endpoint is automatically routed to either the master or the slave node (on failover, I assume). That answer also states that:
Azure... requires checks on the client side to ensure that the node is indeed Master or Slave
So clients see a single endpoint and sometimes have to check which instance they're talking to - that raises some questions:
When should a Redis client care whether it talks to the master or the slave node? Is it only to prevent inconsistency during failover, or are there other concerns here?
How (and when) should a client check whether it's connected to the master or the slave instance? Is it by running INFO replication?
From the docs:
When the master node is rebooted, Azure Redis Cache fails over to the replica node and promotes it to master. During this failover, there may be a short interval in which connections may fail to the cache.
My understanding is you never connect to the slave because it is never exposed to you. If the master goes out, the slave is promoted to master and that's what you reconnect to.
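To the second question: yes, one way is to parse the role: line out of the INFO replication output. A sketch with ioredis (the endpoint below is a placeholder):

var Redis = require('ioredis');
var client = new Redis(6379, 'your-cache.redis.cache.windows.net'); // placeholder endpoint

// INFO replication returns a text block containing a `role:` line,
// which reads `master` or `slave` depending on the node reached.
client.info('replication').then(function (info) {
  var role = /role:(\w+)/.exec(info)[1];
  console.log('Connected to a ' + role + ' node');
});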
I'm setting up a simple 1 Master - N Slaves Redis cluster (low write rate, high read count). How to set this up is well documented on the Redis website; however, there is no information (or I missed it) about how the clients (Node.js servers in my case) handle the cluster. Do my servers need to keep two Redis connections open: one to the master (for writes) and one to a slave load balancer (for reads)? Does the Redis driver handle this automatically and send reads to slaves and writes to the master?
The only approach I found was using thunk-redis library. This library supports connecting to Redis master-slave without having a cluster configured or using a sentinel.
You simply add multiple IP addresses to the client:
const redis = require('thunk-redis');
const client = redis.createClient(['127.0.0.1:6379', '127.0.0.1:6380'], {onlyMaster: false});
You don't need to connect to a particular instance; every instance in a Redis cluster has information about the cluster, so your client can connect to any instance. If you try to update a key held by a different master (other than the one you connected to), the Redis client takes care of it by following the redirection provided by the server.
To answer your second question, you can enable reads from a slave by issuing the READONLY command.
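For example, with ioredis against a Redis Cluster, the scaleReads option automates this: the client sends READONLY on its replica connections and routes read commands to them. The seed address below is a placeholder; the client discovers the remaining nodes itself.

var Redis = require('ioredis');

var cluster = new Redis.Cluster(
  [{ host: '127.0.0.1', port: 6379 }], // one reachable seed node is enough
  { scaleReads: 'slave' }              // route read commands to replicas
);

cluster.set('key', 'value');          // writes go to the key's owning master
cluster.get('key').then(console.log); // reads are served by a replica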
My NodeJS client is able to connect to the MongoDB primary server and interact with it, as per requirements.
I use the following code to build a Server Object
var dbServer = new Server(
  host, // primary server IP address
  port,
  {
    auto_reconnect: true,
    poolSize: poolSize
  });
and the following code to create a DB Object:
var db = new Db(
  'MyDB',
  dbServer,
  { w: 1 }
);
I was under the impression that when the primary goes down, the client will automatically figure out that it now needs to talk to one of the secondaries, which will be elected to be the primary.
But when I manually kill the primary server, one of the secondary servers does become the primary (as can be observed from its mongo shell and the fact that it now responds to mongo shell commands), but the client doesn't automatically talk to it. How do I configure NodeJS server to automatically switch to the secondary?
Do I need to specify all 3 server addresses somewhere? But that doesn't seem like a good solution, as once the primary is back online, its IP address will be different from what it originally was.
I feel that I am missing something very basic, please enlighten me :)
Thank You,
Gary
Well, your understanding is partly there, but there are some problems. The general premise of assigning more than a single server in the connection is that should that server address be unavailable at the time of connection, then something else from the "seed list" will be chosen in order to establish the connection. This removes a single point of failure, such as the "Primary" being unavailable at this time.
Where this is a "replica set", the driver will discover the members once connected and then "automatically" switch to the new "Primary" as that member is elected. So this does require that your "replica set" is actually capable of electing a new "Primary" in order to switch the connection. Additionally, this is not "instantaneous", so there can be a delay before the new "Primary" is promoted and able to accept operations.
Your "auto_reconnect" setting is also not doing what you think it is doing. All this manages is that if a connection "error" occurs, the driver will "automatically" retry the connection without throwing an exception. What you likely really want to do is handle this yourself as you could end up infinitely retrying a connection that just cannot be made. So good code would take this into account, and manage the "re-connect" attempts itself with some reasonably handling and logging.
Your final point on IP addresses is generally addressed by using hostnames that resolve to an IP address where those "hostnames" never change, regardless of what they resolve to. This is equally important for the driver as it is for the "replica set" itself. As indeed if the server members are looking for another member by an IP address that changes, then they do not know what to look for.
So the driver will "fail over" or otherwise select a new available "Primary", but only within the same tolerances that the servers can also communicate with each other. You should seed your connections as you cannot guarantee which node is the "Primary" when you connect. Finally, you should use hostnames instead of IP addresses if the latter are subject to change.
The driver will "self discover", but again it is only using the configuration available to the replica set in order to do so. If that configuration is invalid for the replica set, then it is invalid for the driver as well.
Example:
var MongoClient = require('mongodb').MongoClient;

// Any reachable member of the seed list lets the driver discover the rest
MongoClient.connect("mongodb://member1,member2,member3/database", function(err,db) {
})

Or otherwise with an array of Server objects instead.
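The array-of-servers form might look something like the following sketch, assuming the legacy 1.x driver (the member hostnames and the rs_name value are placeholders, and some releases export the class as ReplSetServers rather than ReplSet):

var mongo = require('mongodb');

// Seed every known member; the driver discovers the current primary
// from whichever members it can actually reach.
var replSet = new mongo.ReplSet([
  new mongo.Server('member1', 27017),
  new mongo.Server('member2', 27017),
  new mongo.Server('member3', 27017)
], { rs_name: 'rs0' });

var db = new mongo.Db('database', replSet, { w: 1 });
db.open(function (err, db) {
  // db now follows the replica set's current primary
});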