Can I set up a Cassandra cluster with vnodes in one datacenter and not in another - cassandra

I have a Cassandra cluster with one datacenter, upgraded from 1.0.7 to 2.1.8. However, the old datacenter does not use vnodes, because 1.0.7 did not support them.
Now I want to add a new datacenter running 2.1.8 and enable vnodes there. Can I keep the old datacenter without vnodes and run the new datacenter with vnode settings?

You could try the procedure listed here:
"You cannot directly convert single-token nodes to vnodes. However, you can configure another data center with vnodes already enabled and let Cassandra's automatic mechanisms distribute the existing data into the new nodes. This method has the least impact on performance."
In practice that means leaving the old DC's nodes on their single initial_token values and setting num_tokens (commonly 256) on the new DC's nodes before they bootstrap.

Related

Removing DC from multi DC cluster in Cassandra

I have a two-datacenter site (dc1 and dc2). I write to dc1 with a replication factor of 3 in each DC (dc1:3, dc2:3). dc2 is a backup site taking no traffic. I upgraded all the nodes of dc2 to C* version 3.11.2; the nodes of dc1 are on C* version 2.1.16. Now, due to an issue, I have to roll back my upgrade. I have two options:
Restore the complete site (dc1 and dc2) from a data backup - this would cause a lot of data loss.
Remove dc2 from the cluster using the steps given here.
Is there any issue with removing a site (dc2) while running mixed C* versions?
If it were me, I would:
Take DC2 out of replication.
Shut down the nodes in DC2.
Remove the nodes/assassinate them.
Uninstall C* completely.
Wipe the nodes of all data/logs/configuration.
Install C* and reconfigure.
Add nodes to a new DC.
This way there's no data loss from having to restore backups. Cheers!
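For the first step, taking DC2 out of replication just means removing it from each keyspace's replication options. A minimal sketch with the Java driver (the keyspace name and contact point are hypothetical; run the same ALTER for every user keyspace, plus system_auth and any other keyspace replicated to DC2):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

Cluster cluster = Cluster.builder().addContactPoint("10.0.1.1").build(); // a dc1 node
Session session = cluster.connect();
// Drop dc2 from the topology so its replicas are no longer required.
session.execute("ALTER KEYSPACE my_ks WITH replication = "
        + "{'class': 'NetworkTopologyStrategy', 'dc1': 3}");
cluster.close();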
Yes, the second option looks good, and you can recover your data safely. You should remove the DC2 datacenter from your existing cluster. Since DC2 takes no traffic, the removal and later re-addition should be easy to perform.
You need to follow the steps below:
Change the replication factor of your keyspaces so they no longer include DC2.
Stop the Cassandra service on the DC2 nodes.
Remove the nodes from the existing cluster with the nodetool removenode command; if that causes an issue, you can use assassinate.
Once the nodes have been removed from the cluster one by one, uninstall Cassandra on them.
Completely remove the existing data on each removed node.
Then install a fresh Cassandra based on the previous configuration; you can refer to the config files on the existing cluster, or to the backup you took of your 2.1.16 configs.
Now add your datacenter to the cluster again.
This way you can get your datacenter and its data back quickly.
You can refer to the documentation here if anything about the addition is unclear:
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsAddDCToClusterDesigDC.html

Multi DC replication between different Cassandra versions

We have an existing Cassandra cluster (3.0.9) running in production.
Now we want to create data pipelines that ingest data from Cassandra and persist it in Hadoop. We are thinking of using the CDC feature (available from Cassandra 3.8) along with Kafka Connect.
We are thinking of creating a new read-only DC which will replicate data from the production DC. This new DC will run the latest Cassandra version (3.8+) with CDC enabled.
My questions:
For replication to work, do both DCs need to run the same version of Cassandra? Can't we achieve this without upgrading the DC used by the service?
Is it possible to enable the CDC feature only in the new read-only DC?
UPDATE:
More information from the C* mailing list: https://lists.apache.org/thread.html/r9e705895c480f264998c29cf69c0eb2296382049467e31c447f676c7%40%3Cuser.cassandra.apache.org%3E
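For reference, CDC in 3.8+ is toggled in two places: per node (cdc_enabled: true in cassandra.yaml) and per table. A sketch of the per-table side, assuming an existing Java driver Session and a hypothetical table ks.events:

// "ks.events" is a placeholder; apply this to each table you want captured.
session.execute("ALTER TABLE ks.events WITH cdc = true");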
I think the new DC should run the same Cassandra version as the existing DC when you add a DC to replicate the data. You may refer to the recommended document below for adding a new datacenter to an existing cluster.
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsAddDCToCluster.html
To get the feature you want, you should upgrade the existing DC to the higher Cassandra version.
You can treat the new DC as read-only by not sending any direct traffic to it; all client connections should go to the older DC.
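A sketch of keeping all client traffic on the older DC with the Java driver, so the new DC only ever receives replication (the datacenter name and contact point are hypothetical):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.1.1") // a node in the production DC
        .withLoadBalancingPolicy(DCAwareRoundRobinPolicy.builder()
                .withLocalDc("dc_prod") // clients route only to this DC
                .build())
        .build();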

Creating new datacenter with Datastax OpsCenter

I'd like to enable vnodes on my Cassandra cluster, which has an Analytics DC and a regular Cassandra DC. I am using OpsCenter 5.0.1 and DSE 4.5. My question is: how can I create a new DC with OpsCenter, with vnodes enabled, so I can transfer my data over from my existing DCs? I am following the instructions on this page, but surely I don't have to manually edit the config file on every node to enable a new datacenter, right? Any help much appreciated.
Unfortunately, OpsCenter's automated provisioning doesn't currently support creating multi-DC clusters or adding datacenters to existing clusters. We know this is important functionality that's missing, and we are working on making it available as soon as we can.

Ability to write to a particular cassandra node

Is there a way to write to a particular node using the DataStax driver?
For example, I have three nodes in datacenter 1 and three nodes in datacenter 2.
Existing
If I build the cluster with any one of them as a seed, all the nodes get detected by the DataStax Java driver. So if I insert data using the driver, it automatically chooses one of the nodes to act as the coordinator (preferably in the local datacenter).
Requirement
I want a way to contact a node in datacenter 2 and hand over the coordinator job to one of the nodes in datacenter 2.
Why i need this
I am trying to use the trigger functionality from datacenter 2 alone. Since triggers are handled by the coordinator, I want the coordinator to be selected from datacenter 2 so that datacenter 1 doesn't have to do this work.
You may be able to use the DCAwareRoundRobinPolicy load balancing policy to achieve this by creating the policy such that DC2 is considered the "local" DC.
Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("dc2"));
In the above example, remote (non-DC2) nodes will be ignored.
There is also a new WhiteListPolicy in driver version 2.0.2 that wraps another load balancing policy and restricts the nodes to a specific list you provide.
Cluster.Builder builder = Cluster.builder().withLoadBalancingPolicy(new WhiteListPolicy(new DCAwareRoundRobinPolicy("dc2"), whiteList));
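The whiteList argument above is a Collection&lt;InetSocketAddress&gt; naming the only hosts the driver may connect to. A hypothetical example for three DC2 nodes on the default native protocol port:

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;

List<InetSocketAddress> whiteList = Arrays.asList(
        new InetSocketAddress("10.0.2.1", 9042),
        new InetSocketAddress("10.0.2.2", 9042),
        new InetSocketAddress("10.0.2.3", 9042));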
For multi-DC scenarios, Cassandra provides EACH_* and LOCAL_* consistency levels, where EACH_* requires acknowledgment from every DC and LOCAL_* only from the local one.
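For illustration, a sketch of setting these per statement with the Java driver (the keyspace and table names are hypothetical):

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// Acknowledged by a quorum of replicas in every DC:
Statement global = new SimpleStatement("INSERT INTO ks.t (id, val) VALUES (1, 'x')")
        .setConsistencyLevel(ConsistencyLevel.EACH_QUORUM);
// Acknowledged by a quorum in the coordinator's DC only:
Statement local = new SimpleStatement("INSERT INTO ks.t (id, val) VALUES (2, 'y')")
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);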
If I understood correctly, what you are trying to achieve is DC failover in your application. This is not a good practice. Let's assume your application is hosted in DC1 alongside Cassandra. If DC1 goes down, your entire application is unavailable. If DC2 goes down, your application can still write with LOCAL consistency, and C* will replicate the changes when DC2 is back.
If you want to achieve HA, you need to deploy the application in each DC, use CL=LOCAL_*, and do failover at the DNS level (e.g. using AWS Route 53).
See data consistency docs and this blog post for more info about consistency levels for multiple DCs.

GridGain open source datacenter topology specification

GRIDGAIN DATA-CENTER REPLICATION
A few specific questions regarding the recently open-sourced GridGain code. The gridgain.org support link says datacenter replication is not enabled in the open-source version. Is this true or false?
More importantly, assuming the open-source version does have the datacenter feature enabled, how do we go about specifying the topology and activating the replication?
For example, the official documentation suggests creating/setting a GridDrSenderCacheConfiguration and a GridDrSenderHubConfiguration with details of the topology. I did this, but it didn't seem to enable any cross-datacenter replication.
More specifically, I did the following:
Assign a dataCenterId byte parameter in the config.xml for GridGain:
...
Define those nodes that are part of that datacenter under the appropriate element:
... add ip addresses of nodes
Define the above for each node in each datacenter appropriately. In the GridGain Java client code, initiate a GridGain instance and set the GridDrSenderCacheConfiguration and GridDrSenderHubConnection (along with the GridDrSenderHubConnectionConfiguration) as specified in the docs for each node in each datacenter, also using a dummy GridDrReceiverHubConfiguration object (all defaults).
However, this does not seem to do any replication across the data centers.
Would someone from the GridGain team please give some examples of setting up data center replication: how to set up the config.xml, and what to enable in the Java code when instantiating a GridGain instance?
Also, I am trying to avoid intra-datacenter replication by setting the gridDrSenderHubConnectionConfiguration.setIgnoredDataCenterIds(localDC) parameter to avoid replicating if the datacenter is the local one.
Just confirmed: since data center replication is not present in the open-source version, no replication will happen in this case. Please download the eval version of GridGain Enterprise Edition and try it out.
