Can Cassandra datacenters be configured as replication receivers only?

Assume that we have two Cassandra datacenters.
One of them is a production environment and well secured; the other is a test environment that is easier to break into, hence untrusted.
We want data replication, but only propagated from the production environment to the test environment, not vice versa.
Is there any way to configure one datacenter as a slave, so that it never pushes replication data back to the other one and its untrusted changes get reverted? It should be a read-only instance that only receives data from the other datacenter.
If somebody compromises the test environment, we do not want the production environment to receive any manipulated data. Ideally, changes made in the test environment would be reverted back to the production state during replication.

No, it's not possible directly - in Cassandra, changes made to a keyspace are propagated to all datacenters that replicate it.
You can try different options by using separate clusters for prod and test:
Implement code to read the CDC files and apply the changes to the test cluster - this won't help with deleting data from the test environment, as this approach only applies new changes.
Use DataStax Advanced Replication (which uses a similar approach).
Periodically replay the data from production to test using sstableloader - it replays all of the data, so it will help with deleting data on test, but it could take quite a long time if you have a lot of data.
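For context on why there is no directional control: replication is configured per keyspace and applies cluster-wide. A minimal sketch with the Python driver, using hypothetical datacenter names prod_dc/test_dc and a hypothetical contact point, of the kind of keyspace definition involved:

    from cassandra.cluster import Cluster

    # Hypothetical contact point and DC names, for illustration only.
    cluster = Cluster(['10.0.0.1'])
    session = cluster.connect()

    # NetworkTopologyStrategy only lets you say HOW MANY replicas live in each
    # datacenter; it has no notion of direction, so writes accepted in either
    # DC are propagated to replicas in both.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS app_data
        WITH replication = {
            'class': 'NetworkTopologyStrategy',
            'prod_dc': 3,
            'test_dc': 2
        }
    """)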

Related

How to make Cassandra nodes have the same data?

I have two computers each one being a Cassandra node and they communicate well with each other.
From what I understand, Cassandra will replicate the data between them but will always query certain portions of it from only one of them.
What I would like, though, is for the data to be copied to both nodes so that they hold the same data, but each one only uses the data on its local node. Is that possible?
The background reason is that the application on each node keeps generating and downloading a lot of data, and at the same time both are doing some CPU-intensive tasks. What happens is that one node saves the data and suddenly can't find it anymore, because it has been saved on the other node, which is too busy to reply with that data.
Technically, you just need to change the replication factor to the number of nodes, and set your application to always read from the local node using a whitelist load-balancing policy. But it may not help you: if your nodes are very busy, replication of the data from the other node may also not happen, so the query will fail anyway, or the replication will add extra overhead and make the situation even worse.
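A rough sketch of that setup with the Python driver, assuming the application sits next to the node on 127.0.0.1 and uses a keyspace called my_keyspace (both placeholders):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
    from cassandra.policies import WhiteListRoundRobinPolicy

    LOCAL_NODE = '127.0.0.1'  # assumed: the node running next to the application

    # Route all requests to the local node only and read at consistency ONE,
    # so queries never wait on the busy remote node.
    local_only = ExecutionProfile(
        load_balancing_policy=WhiteListRoundRobinPolicy([LOCAL_NODE]),
        consistency_level=ConsistencyLevel.ONE,
    )

    cluster = Cluster([LOCAL_NODE],
                      execution_profiles={EXEC_PROFILE_DEFAULT: local_only})
    session = cluster.connect('my_keyspace')  # assumed keyspace name

    # The keyspace must also carry a replica on every node, e.g.:
    # ALTER KEYSPACE my_keyspace
    #   WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};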
You need to rethink your approach - typically you separate application nodes from database nodes, so that application processes don't affect database processes.

Fire triggers in replica nodes (cassandra)

I am using a Cassandra 4-node cluster with full replication in all nodes.
I have defined a trigger on a table. However, when I update a row in this table, the trigger is fired only on the local node.
Is there any way to fire this trigger in all nodes (based on replication)?
Triggers run on the coordinator before the mutations are passed off to the replicas to be applied. To see the changes on a per-replica basis, the best way is to use CDC (which is also more reliable than triggers) and follow the changes as they are flushed to the commitlog.
With CDC you have to solve other problems:
validate the ordering of the captured changes, since it is not guaranteed
make a trade-off between having a single point of failure and implementing a deduplication checker for the CDC logs; let me explain:
You either enable CDC logging on one node, and that node becomes your bottleneck, or you enable CDC on all nodes and then have to manage data duplication somehow, since the coordinator sends each mutation to all of the replicas.
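If you go the CDC route, enabling it per table is a small step; a minimal sketch with the Python driver (keyspace, table name, and contact point are placeholders, and cdc_enabled: true must already be set in cassandra.yaml on the nodes):

    from cassandra.cluster import Cluster

    cluster = Cluster(['10.0.0.1'])   # hypothetical contact point
    session = cluster.connect()

    # Turn on change data capture for one table; once commitlog segments are
    # flushed, the mutations for it are kept in the node's CDC (cdc_raw)
    # directory, where your reader process can pick them up.
    session.execute("ALTER TABLE my_keyspace.my_table WITH cdc = true")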
You can deploy triggers on every node of your cluster. It won't cause any data duplication and works perfectly fine.

Can we back up only one availability zone of an AZ-replicated Cassandra cluster

Since my Cassandra cluster is replicated across three availability zones, I would like to back up only one availability zone to lower the backup costs. I have also experimented with restoring nodes in a single availability zone and got back most of my data in a test environment. I would like to know if there are any drawbacks to this approach before deploying this solution in production. Is anyone following this approach in your production clusters?
Note: as I back up at regular intervals, I know that I may lose updates that had only reached the nodes in the other two AZs (e.g. via quorum writes) at the time of the snapshot, but that's not a problem.
You can back up only a specific DC, or even specific nodes.
AFAIK, the only drawback is whether your data is consistent/up to date, and since you can afford to lose some data, it shouldn't be a problem. If you are, for example, performing writes with the ALL consistency level, the data should be up to date on all nodes.
BUT you must be sure that your data is indeed replicated across the availability zones, by setting the rack/DC properties appropriately or by using the EC2 snitch that supports multiple AZs.
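For the "writes with ALL" point, a minimal sketch with the Python driver (contact point, keyspace, table, and values are placeholders):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['10.0.0.1'])           # hypothetical contact point
    session = cluster.connect('my_keyspace')  # assumed keyspace

    # With ConsistencyLevel.ALL the write is only acknowledged once every
    # replica (including the ones in the AZ you back up) has applied it,
    # at the cost of availability if any replica is down.
    insert = SimpleStatement(
        "INSERT INTO my_table (id, value) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.ALL,
    )
    session.execute(insert, (42, 'example'))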
EDIT:
Global Snapshot
Running nodetool snapshot only takes a snapshot on a single node at a time. This only creates a partial backup of your entire data. You will want to run nodetool snapshot on all of the nodes in your cluster. But it's best to run them at the exact same time, so that you don't have fragmented data from a time perspective. You can do this a couple of different ways. The first is to use a parallel ssh program to execute the nodetool snapshot command at the same time. The second is to create a cron job on each of the nodes to run at the same time. The second assumes that your nodes have clocks that are in sync, which Cassandra relies on as well.
Link to the page:
http://datascale.io/backing-up-cassandra-data/
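One way to approximate the "parallel ssh" approach from the quote is a small Python wrapper; a sketch only, assuming key-based SSH access to the nodes, nodetool on their PATH, and hypothetical host addresses and snapshot tag:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NODES = ['10.0.0.1', '10.0.0.2', '10.0.0.3']  # hypothetical node addresses
    TAG = 'backup_tag'                            # hypothetical snapshot name

    def snapshot(host):
        # Run nodetool snapshot on the remote node over SSH; -t names the snapshot
        # so all nodes end up with the same tag.
        return subprocess.run(
            ['ssh', host, f'nodetool snapshot -t {TAG}'],
            capture_output=True, text=True,
        )

    # Fire the snapshots in parallel so all nodes snapshot at (roughly) the same time.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for host, result in zip(NODES, pool.map(snapshot, NODES)):
            print(host, 'ok' if result.returncode == 0 else result.stderr.strip())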

Strategies for Cassandra backups

We are considering additional backup strategies for our Cassandra database.
Currently we use:
nodetool snapshot - allows us to take a snapshot of all data on a given node at any time
the replication factor is in itself a form of backup - if any node ever goes down, we still have additional copies of the data
Is there any other efficient strategy? The biggest issue is that we need a copy of the live data in an additional datacenter, which has just one server. So, for example, let's say we have 3 Cassandra servers that we want to replicate to one backup Cassandra server in a second datacenter. The only way I can think of doing it is to set the replication factor to 4. That means all the data would have to be written to all servers at all times.
We are also considering running Cassandra on top of some networked distributed file system like HDFS. Correct me if I'm wrong, but wouldn't that be a very inefficient use of Cassandra?
Are there any other efficient backup strategies for Cassandra that can back up the current data without impairing its performance?

Set cluster name when using Cassandra CQL/JDBC driver

I'm using the Cassandra CQL/JDBC driver I got from google code but it doesn't seem to let me provide a cluster name - is there a way?
I'm using cluster names to ensure I don't run commands against a live system; it has a different cluster name to my dev systems.
Edit: Just to clarify, I have two totally separate Cassandra clusters, one live and one for test. They have different cluster names to ensure that I don't accidentally run test code meant for the test cluster on the live cluster. Therefore any client I need to use must let me set a cluster name. Hector does this.
There is no inbuilt protection for checking cluster names for Cassandra clients. It is built to ensure nodes from different clusters don't try and join together but not to ensure clients connect to the right cluster. It would be possible to add this checking to a client though (since the cluster name is exposed to the client) but I'm not aware of any clients doing this.
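As a sketch of what such a client-side check could look like (shown with the Python driver rather than the CQL/JDBC one; the expected name and contact point are assumptions):

    from cassandra.cluster import Cluster

    EXPECTED = 'Dev Cluster'   # assumed: the cluster you intend to talk to

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect()

    # The server reports its cluster_name in system.local, so the client can
    # refuse to proceed if it accidentally connected to the wrong environment.
    row = session.execute("SELECT cluster_name FROM system.local").one()
    if row.cluster_name != EXPECTED:
        raise RuntimeError(
            f"Connected to cluster '{row.cluster_name}', expected '{EXPECTED}'")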
I'd strongly recommend firewalling off your different environments to avoid this kind of mistake. If that isn't possible, you should choose different ports to avoid confusion. Change this with the 'rpc_port' setting in cassandra.yaml.
You'd have to mirror the data on two different clusters. You can't access the same cluster under different names.
To rename your cluster (from the default 'Test Cluster'), edit the Cassandra configuration file found at location/of/cassandra/conf/cassandra.yaml. It's the top line; if you need more details, look at the DataStax configuration documentation.
