Entries in Hazelcast Map are removed when adding a new cluster node - hazelcast

I am using Hazelcast version 3.9.1, and when I tried to add a new node, many map entries were removed.
We are discovering members by TCP; we add all the IPs to the new node and then restart the Hazelcast service.
What is the reason the map entries are removed?

In a running cluster, Map data is not removed but repartitioned, i.e. the data is rebalanced across the cluster. In other words, when a new member is added to the cluster, some of the existing entries move to the new member.
If you only want to add members to an existing cluster, there is no need to restart; Hazelcast automatically rebalances the data.
If there are no backups enabled and you restart the cluster, then the data is lost, since it is stored in memory. To prevent data loss, enable backups (by default, 1 synchronous backup is enabled) and do not shut down the entire cluster at once.
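For reference, a minimal programmatic sketch of that setup, assuming the Hazelcast 3.x Java API; the IP addresses and the map name "myMap" are placeholders, and the backup count of 1 shown here is the same as the default:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.config.TcpIpConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberWithBackups {
    public static void main(String[] args) {
        Config config = new Config();

        // TCP/IP discovery: disable multicast and list the addresses of all members.
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        TcpIpConfig tcpIp = join.getTcpIpConfig();
        tcpIp.setEnabled(true);
        tcpIp.addMember("10.0.0.1");
        tcpIp.addMember("10.0.0.2");

        // Keep 1 synchronous backup of every partition so a single member
        // going down (or being restarted) does not lose map entries.
        config.getMapConfig("myMap").setBackupCount(1);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("myMap").put("key", "value");
    }
}

With this in place you can start the new member with the extended IP list and leave the existing members running; the data migrates to the new member without any restart.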

As wildnez said in his response, adding a new node will cause rebalancing to happen; so while map entries may be removed from the map of a particular node, they will remain in the cluster.
As a concrete example, if you are using the default of 1 backup, and start up a cluster with one node and 1000 entries, then all 1000 entries will be in the map of that node. There will be no backups in that case because even though you specified 1 backup, backups won't be stored on the same node as their primary.
If you add a second node, then approximately 50% of the map entries will be migrated to the new node. For the entries that migrate, their backups will be added on the first node, and for the map entries that don't migrate, their backups will be created on the second node.
If you add a third node, migration will happen again, and each node will have roughly one third of the primary entries and one third of the backups.
Adding a node should never cause map entries to be removed from the cluster, but it will cause entries to migrate between cluster members.
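If you want to observe this, here is a rough sketch (assuming the Hazelcast 3.x Java API; the map name "myMap" and the entry count are placeholders) that prints how many primary and backup entries the local member holds. Running it on each member before and after adding a node shows the entries migrating rather than disappearing:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.monitor.LocalMapStats;

public class ShowLocalEntryCounts {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Integer, String> map = hz.getMap("myMap");

        // Populate some test data; entries are spread across the partitions.
        for (int i = 0; i < 1000; i++) {
            map.put(i, "value-" + i);
        }

        // LocalMapStats reports only what this member holds locally.
        LocalMapStats stats = map.getLocalMapStats();
        System.out.println("Primary entries on this member: " + stats.getOwnedEntryCount());
        System.out.println("Backup entries on this member:  " + stats.getBackupEntryCount());
    }
}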

Related

Cassandra cluster - Migrating all hosts in cluster

I am using Cassandra (3.5) with 20 nodes: data center 1 has 10 nodes and data center 2 has 10 nodes, and the cluster holds a huge amount of data. All hosts are what I'll call legacy hosts. Now we have newer-generation hosts, say generation-2.
I have tried adding new nodes and decommissioning old nodes, but this will be time consuming.
Q1: How can I migrate all hosts from the legacy hosts to the generation-2 hosts? What is the best approach for that?
Q2: What will be the rollback strategy?
Q3: Finally, how can I validate the data once I migrate to the generation-2 hosts?
If you are just replacing the nodes with newer hardware, keeping the same number of nodes, then it's simple (the following operations should be done on every node):
1. Prepare the new installation on every node, with configuration identical to the existing nodes but with different IP addresses; don't start the new nodes yet.
2. (optional) Disable autocompaction with nodetool disableautocompaction - this can help step 5 run faster.
3. Copy the data from the old node to the new node using rsync (this could take a long time).
4. Execute nodetool drain and stop the old node.
5. Use rsync again to synchronize the changes that happened since the initial copy (this should be relatively fast).
6. Make sure that the old node won't start again (for example, remove the Cassandra package) - otherwise it could cause chaos.
7. Start the new node.
This works because a Cassandra node is identified by a UUID stored in the local system table, so changing the IP doesn't affect operations.
P.S. In the future, if you need to replace a node (not as described above, but one that has completely died), use the node replacement procedure - that way you won't stream the data twice, as happens when you decommission and then re-add a node.
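If you want to double-check that identity claim yourself, a small sketch along these lines (the DataStax Java driver and the contact point are my assumptions, not something from the question) reads the node's UUID from system.local; it should be unchanged after the move, even though the address is new:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ShowHostId {
    public static void main(String[] args) {
        // The contact point is a placeholder; point it at the node you want to inspect.
        try (Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
             Session session = cluster.connect()) {
            // system.local describes the node itself; host_id is the UUID that
            // survives an IP change, which is why the rsync-based move works.
            Row row = session.execute(
                    "SELECT host_id, broadcast_address FROM system.local").one();
            System.out.println("host_id: " + row.getUUID("host_id"));
            System.out.println("address: " + row.getInet("broadcast_address"));
        }
    }
}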

CouchDB cluster database distribution

We have a CouchDB cluster with 24 nodes, with q=8 and n=3 as the default cluster settings, and 100 databases already created. If we added 24 more nodes to the cluster and started creating new databases, would they be created on the new nodes, or not necessarily? How does CouchDB decide where to put the new databases?
We are running all nodes on 2.3.1.
By default CouchDB will assign the shards for databases across all the nodes in the cluster randomly, so your new databases will have shards on both the new nodes and the old ones. It will spread the shards across as many nodes as possible, and guarantee that two replicas of a shard are never co-located on the same node.
If you wanted to have shards hosted only on the new nodes (e.g. because the old ones are filling up) you could take advantage of the "placement" feature. There are two steps involved:
Walk through the documents in the _nodes database and set a "zone" attribute on each document, e.g. "zone":"old" for your old nodes and "zone":"new" for the new ones (a rough sketch of this step follows below).
Define a [cluster] placement config setting that tells the server to place 3 copies of each shard in your new zone:
[cluster]
placement = new:3
You could also use this feature for other purposes; for example, by splitting your nodes up into zones based on rack location, availability zone, etc. to ensure that replicas have maximum isolation from each other. You can read more about the setting here:
http://docs.couchdb.org/en/2.3.1/cluster/databases.html#placing-a-database-on-specific-nodes
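As a rough illustration of the first step (my own sketch, not the documented tooling): in CouchDB 2.x the _nodes database is reachable on each node's node-local port 5986, and tagging a node is just a document update. The host, credentials and node name below are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TagNodeZone {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("admin:password".getBytes()); // placeholder credentials

        // Node-local interface; the node name is a placeholder ('@' is encoded as %40).
        URI doc = URI.create(
                "http://127.0.0.1:5986/_nodes/couchdb%40new-node-1.example.com");

        // Fetch the document's current revision (CouchDB returns it in the ETag header).
        HttpResponse<String> get = client.send(
                HttpRequest.newBuilder(doc).header("Authorization", auth).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        String rev = get.headers().firstValue("ETag").orElseThrow().replace("\"", "");

        // Write the document back with the zone attribute set.
        String body = "{\"_id\":\"couchdb@new-node-1.example.com\","
                + "\"_rev\":\"" + rev + "\",\"zone\":\"new\"}";
        HttpResponse<String> put = client.send(
                HttpRequest.newBuilder(doc).header("Authorization", auth)
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(body)).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(put.statusCode() + " " + put.body());
    }
}

Repeat this (with the appropriate zone value) for every node, then add the placement setting shown above.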

Adding new node cassandra 3.x

We are in the process of adding new nodes to our existing Cassandra 3.x cluster, basically following the steps outlined in this article (https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html).
Currently, our 3.x cluster does incremental repairs. What I'm not 100 percent sure about is whether we need to do anything after we do the node cleanup. Specifically, are our newly added nodes set up to do incremental repairs after following the procedure listed above?
Thanks
Marshall
Adding a node involves rebalancing the token distribution among the nodes in the cluster and bootstrapping the new node. Repair is a regular maintenance process and is not necessarily needed as part of adding a node.
For the token redistribution part, a simplified example: suppose you have 20 tokens and 4 nodes initially. If the distribution is random enough, each node would be the primary node for 5 tokens. When you add a node and change the configuration, the 20 tokens would be distributed across 5 nodes, and each node would be the primary node for 4 tokens. When you run bootstrap to add the new node, the new node knows which tokens belong to it and streams the missing data from the other existing nodes. After bootstrapping is done, nodetool cleanup on the existing nodes removes the data that no longer belongs to them. To sum up, bootstrapping a new node redistributes tokens and streams data to the new node based on that distribution; you don't need repair in this process to stream data, and cleanup removes the data whose ownership has changed.
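If you want to watch the redistribution (as opposed to repair) happen, a quick sketch with the DataStax Java driver (the driver, contact point and keyspace name are my assumptions) lists how many token ranges each host is a replica for in a given keyspace; the counts shift once the new node finishes bootstrapping:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;

public class ShowTokenOwnership {
    public static void main(String[] args) {
        // Contact point and keyspace are placeholders for your cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build()) {
            cluster.init();
            Metadata metadata = cluster.getMetadata();
            for (Host host : metadata.getAllHosts()) {
                int ranges = metadata.getTokenRanges("my_keyspace", host).size();
                System.out.println(host.getAddress() + " holds " + ranges + " token ranges");
            }
        }
    }
}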
Repair, however, is a regular process to guarantee data consistency, and the choice between incremental and full repair is out of scope when talking about adding a node.
There is a good reference to read on what happens under the hood when you change the topology of a cluster.

Cassandra ring configuration

I am trying to connect Apache Cassandra nodes into a ring. They are not DataStax versions, but Cassandra 1.2.8 from the Apache website. When trying to add one node as the seed of the other, I get the following exception:
Unable to find compaction strategy class 'com.datastax.bdp.hadoop.cfs.compaction.CFSCompactionStrategy'
Before that, I change the "listen_address" and "rpc_address" to the local IP address of each node. As the next step, I add one node's IP as a seed to the other node. The nodes start up and the exception is printed, but both nodes run fine until a restart. After restarting either node, the exception is printed and the nodes do not run.
This is very strange - I do not have any DSE components.
Did you previously use any DSE components? If you did and are using the same data directory on any of your nodes, it may find old column families that were created with this compaction strategy. If you have no data you want in the data directories on all your nodes, you should clear them by stopping all nodes, deleting the directories, then starting the nodes.
Or, if you have any DSE nodes still up, they may be joining the new cluster and propagating their schema, thereby creating column families with this compaction strategy. You can find out by looking in the logs to see which nodes try to connect. If any aren't from your 1.2.8 ring, then this is probably the cause.
That error means you either had a DSE Analytics node in your ring at some point, or you restored your schema from someplace that had an Analytics node.
I would check if you have the folder /etc/dse/ on your VM, that would mean DSE was installed there.
To just wipe the node and start from scratch schema wise, you can stop the node, remove the /system/schema_* folders, then start the node. When it starts it will have no schema. Re-create any keyspaces/column families you had before, and they will get read from disk.

Adding a new node to existing cluster

Is it possible to add a new node to an existing cluster in cassandra 1.2 without running nodetool cleanup on each individual node once data has been added?
It probably isn't but I need to ask because I'm trying to create an application where each user's machine is a server allowing for endless scaling.
Any advice would be appreciated.
Yes, it is possible. But you should be aware of the side-effects of not doing so.
nodetool cleanup purges keys that are no longer allocated to that node. According to the Apache docs, these keys count against the allocated data for that node, which can cause the auto bootstrap process for the next node to not properly balance the ring. So depending on how you are bringing new user machines into the ring, this may or may not be a problem.
Also keep in mind that nodetool cleanup only needs to be run on nodes that lost keyspace to the new node - i.e. adjacent nodes, not all nodes, in the cluster.
