How do I select the master Redis pod in this Kubernetes example? - node.js

Here's the example I have modeled after.
In the Readme's "Delete our manual pod" section:
The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
How do I select the new master? All 3 Redis server pods controlled by the redis replication controller from redis-controller.yaml still have the same
labels:
name: redis
which is what I currently use in my Service to select them. How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?

How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?
Kubernetes isn't aware of which Redis pod is the master. You can find the master pod manually by connecting to it and using:
redis-cli info
You will get lots of information about the server, but we only need the role field for our purpose:
redis-cli info | grep ^role
Output:
role:master
Please note that ReplicationControllers have been replaced by Deployments for stateless services. For stateful services, use StatefulSets.
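If you want to run the same check from Node.js instead of redis-cli, here is a minimal sketch using ioredis (the pod IPs and port are placeholders you would take from kubectl get pods -o wide):
const Redis = require('ioredis');

// Placeholder pod IPs, e.g. from `kubectl get pods -o wide`
const podIps = ['10.244.0.11', '10.244.0.12', '10.244.0.13'];

async function findMaster() {
  for (const ip of podIps) {
    const redis = new Redis({ host: ip, port: 6379 });
    // Same data as `redis-cli info replication`
    const info = await redis.info('replication');
    redis.disconnect();
    if (/^role:master/m.test(info)) return ip;
  }
  return null;
}

findMaster().then((ip) => console.log('master pod IP:', ip));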

Your client Redis library can actually handle this. For example with ioredis:
ioredis guarantees that the node you connected to is always a master even after a failover.
So you actually connect to a Redis Sentinel instead of directly to a Redis server.
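A minimal sketch of such a connection (the sentinel host and the master group name "mymaster" are assumptions; use whatever your sentinel Service and sentinel.conf define):
const Redis = require('ioredis');

// ioredis asks the sentinels for the current master and follows it on failover.
const redis = new Redis({
  sentinels: [{ host: 'redis-sentinel', port: 26379 }], // assumed sentinel Service name/port
  name: 'mymaster',                                     // assumed master group name
});

redis.set('foo', 'bar');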

We needed to do the same thing and tried different approaches, like modifying the chart. Finally, we just created a simple Python Docker image that does the labeling, and a chart that exposes the master Redis as a Service. It periodically checks the pods created for redis-ha and labels them according to their role (master/slave).
It uses the same sentinel commands to find the master/slave.
helm chart redis-pod-labeler here
source repo
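The actual labeler is a Python image; below is only a rough Node.js sketch of the same idea (the sentinel address, master name, label selector and label key are all assumptions):
const { execFileSync } = require('child_process');
const Redis = require('ioredis');

async function relabel() {
  // Ask a sentinel which instance is currently the master.
  const sentinel = new Redis({ host: 'redis-ha-sentinel', port: 26379 });
  const [masterIp] = await sentinel.call('sentinel', 'get-master-addr-by-name', 'mymaster');
  sentinel.disconnect();

  // List the redis-ha pods with their IPs, then label each one by role.
  const out = execFileSync('kubectl', [
    'get', 'pods', '-l', 'app=redis-ha',
    '-o', 'jsonpath={range .items[*]}{.metadata.name} {.status.podIP}{"\\n"}{end}',
  ]).toString();
  for (const line of out.trim().split('\n')) {
    const [name, ip] = line.split(' ');
    const role = ip === masterIp ? 'master' : 'slave';
    execFileSync('kubectl', ['label', 'pod', name, `redis-role=${role}`, '--overwrite']);
  }
}

// Run periodically, like the chart's labeler does.
setInterval(() => relabel().catch(console.error), 30000);
A Service for the master can then select on the redis-role=master label.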

Related

How to add a new independent IP to a running Hazelcast cluster without restarting existing nodes?

A Hazelcast cluster is running on different hosts: IP1, IP2...
hazelcast.xml configures the TCP-IP members.
Now I want to expand the cluster to support more services.
I installed a new Hazelcast member on a new host, IP3.
How can I add the new IP3 to the existing cluster without restarting IP1 and IP2?
The members section in the TCP-IP join config is for finding the cluster.
You list some places where cluster members may be. The starting process tries those locations, and if it gets a response, that response includes the locations of all cluster members.
When scaling up you frequently won't know the locations in advance. The TCP-IP list is one solution, but there are other ways if you're running in the cloud, etc.
For your specific question: you don't need to add IP3 to your XML. Or you can, and it will be picked up the next time the processes are restarted.
If you're new to Hazelcast, why not join the community slack
Usually you don't need to change anything: just starting a new member (IP3) with IP1 and IP2 listed will work. The third member will join the cluster.
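For example, the new member's hazelcast.xml join section could simply list the existing members (a minimal sketch; the IPs are placeholders for your actual addresses):
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>IP1</member>
        <member>IP2</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>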
How it looks under the hood (simplified):
the newly started member tries to contact addresses in its member list; after a successful connection, it sends a "Who Is Master" request and reads the master node address from the response;
the new node makes a connection to the master address (if not established already) and asks it to join the cluster;
if the join is successful, the master replies with an updated member list (cluster view) and it also sends the updated list to all other cluster members;
There were significant improvements in handling corner cases for these "incomplete address configuration" issues in Hazelcast 5.2. So if you are on an older version I strongly recommend switching to an up-to-date one.
If for any reason the default behavior is not sufficient in your case, you can also use the /hazelcast/rest/config/tcp-ip/member-list REST endpoint to make member-list changes. The endpoint was also introduced in 5.2. Find details in the documentation: https://docs.hazelcast.com/hazelcast/5.2/network-partitioning/split-brain-recovery#eliminating-unsuccessful-cluster-merges

Splitting read & write to redis with nodejs

I have set up Redis on three separate instances and have configured them in such a way that one instance is the master and two are replicas of the master. I have used Sentinels to make sure there is high availability of the setup. I have a Node.js application which needs to use Redis. How do I achieve read and write splitting in my application, given that if my Redis master goes down, one of my read replicas becomes the master and the writes need to go to it?
As far as I know, ioredis is the only Node.js Redis client that supports Sentinels.
"ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost."

Azure Redis Cache - how to correctly work against a replicated instance

According to this answer from an Azure Redis Cache team member, the Azure Redis Cache exposes a single endpoint. That endpoint is automatically routed to either the master or the slave node (on failover I assume). That answer also states that:
Azure... requires checks on the client side to ensure that the node is
indeed Master or Slave
So clients see a single endpoint and sometimes have to check which instance they're talking to - that raises some questions:
When should a Redis client care whether it talks to the master or the slave node? Is it only to prevent inconsistency during failover, or are there other concerns here?
How (and when) should a client check whether it's connected to the master or the slave instance? Is it by running info replication?
From the docs:
When the master node is rebooted, Azure Redis Cache fails over to the replica node and promotes it to master. During this failover, there may be a short interval in which connections may fail to the cache.
My understanding is that you never connect to the slave because it is never exposed to you. If the master goes down, the slave is promoted to master and that's what you reconnect to.
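If you do want the check, one way is to run INFO replication (or the ROLE command) right after each (re)connect; a minimal sketch with ioredis (host, port and password are placeholders for your Azure cache):
const Redis = require('ioredis');

const client = new Redis({
  host: 'mycache.redis.cache.windows.net', // placeholder Azure endpoint
  port: 6380,
  password: process.env.REDIS_KEY,
  tls: {},
});

client.on('ready', async () => {
  const info = await client.info('replication');
  if (!/^role:master/m.test(info)) {
    // During a failover you may briefly land on a node that isn't master yet.
    console.warn('Not connected to the master yet; retrying shortly...');
  }
});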

Connecting to both master and slave in a replicated Redis cluster

I'm setting up a simple 1 master - N slaves Redis cluster (low write rate, high read count). How to set this up is well documented on the Redis website; however, there is no information (or I missed it) about how the clients (Node.js servers in my case) handle the cluster. Do my servers need to have 2 Redis connections open: one to the master (for writes) and one to a slave load balancer (for reads)? Does the Redis driver handle this automatically and send reads to slaves and writes to the master?
The only approach I found was using the thunk-redis library. This library supports connecting to a Redis master-slave setup without having a cluster configured or using a sentinel.
You just simply add multiple IP addresses to the client:
const client = redis.createClient(['127.0.0.1:6379', '127.0.0.1:6380'], {onlyMaster: false});
You don't need to connect to a particular instance; every instance in a Redis cluster has information about the cluster. So even if you connect to one master, your client can reach any instance in the cluster. If you try to update a key held by a different master (other than the one you connected to), the Redis client takes care of it by following the redirection provided by the server.
To answer your second question: you can enable reads from slaves with the READONLY command.
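For example, ioredis in cluster mode can do this for you: with scaleReads set to 'slave' it sends READONLY on replica connections and routes read-only commands to them (the node address is a placeholder; the client discovers the rest of the cluster):
const Redis = require('ioredis');

const cluster = new Redis.Cluster(
  [{ host: '127.0.0.1', port: 6379 }], // any reachable node
  { scaleReads: 'slave' }              // route read-only commands to replicas
);

cluster.set('foo', 'bar');       // goes to the master that owns the slot
cluster.get('foo', console.log); // served by a replica of that master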

Redis sentinel - How to take a server out of loop?

I had the following Sentinel deployment: 3 Redis instances on different servers, and 3 Sentinels, one on each of these servers.
Now, I realized that the current master does not have much memory, so I stopped the Sentinel and Redis instance on this particular server and did the same setup on a new machine. So I still have the same deployment: 3 Redis instances and 3 Sentinels.
The issue is that the Sentinels now say the master is down, as they think the master is the server which I removed. What should I do to tell Sentinel that it no longer needs to include that server?
From the docs about Redis Sentinel, under the chapter Adding or removing Sentinels:
Removing a Sentinel is a bit more complex: Sentinels never forget already seen Sentinels, even if they are not reachable for a long time, since we don't want to dynamically change the majority needed to authorize a failover and the creation of a new configuration number. So in order to remove a Sentinel the following steps should be performed in absence of network partitions:
Stop the Sentinel process of the Sentinel you want to remove.
Send a SENTINEL RESET * command to all the other Sentinel instances (instead of * you can use the exact master name if you want to reset just a single master). One after the other, waiting at least 30 seconds between instances.
Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of SENTINEL MASTER mastername of every Sentinel.
Further:
Removing the old master or unreachable slaves.
Sentinels never forget about slaves of a given master, even when they are unreachable for a long time. This is useful, because Sentinels should be able to correctly reconfigure a returning slave after a network partition or a failure event.
Moreover, after a failover, the failed over master is virtually added as a slave of the new master, this way it will be reconfigured to replicate with the new master as soon as it will be available again.
However sometimes you want to remove a slave (that may be the old master) forever from the list of slaves monitored by Sentinels.
In order to do this, you need to send a SENTINEL RESET mastername command to all the Sentinels: they'll refresh the list of slaves within the next 10 seconds, only adding the ones listed as correctly replicating from the current master INFO output.
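A small Node.js sketch of that procedure using ioredis (the sentinel addresses and master name are placeholders), sending SENTINEL RESET to each Sentinel with the recommended 30-second gap:
const Redis = require('ioredis');

const sentinelHosts = ['10.0.0.1', '10.0.0.2', '10.0.0.3']; // placeholder sentinel addresses

async function resetSentinels() {
  for (const host of sentinelHosts) {
    const s = new Redis({ host, port: 26379 });
    await s.call('sentinel', 'reset', 'mymaster'); // or '*' to reset every monitored master
    s.disconnect();
    // Wait at least 30 seconds between instances, as the docs recommend.
    await new Promise((resolve) => setTimeout(resolve, 30000));
  }
}

resetSentinels().catch(console.error);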
