Cassandra EC2Snitch for Multi-Region Distributions

Ec2MultiRegionSnitch was created, I assume, because a few years back AWS did not support VPC peering. With AWS's current VPC peering capabilities, we can peer two regions and set up Cassandra in both of them with Ec2Snitch.
As long as the nodes can communicate, Cassandra will recognize the two datacenters as distinct and all should be well.
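For reference, nothing region-specific needs to be configured beyond the snitch itself; a minimal sketch of the relevant cassandra.yaml setting (the DC and rack names are derived automatically from EC2 metadata):

```yaml
# cassandra.yaml on every node, in both regions
endpoint_snitch: Ec2Snitch
# Ec2Snitch derives the datacenter name from the EC2 region
# (e.g. "us-east") and the rack name from the availability zone
# (e.g. "1a"), so the two regions show up as two distinct DCs.
```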
Or so I thought. I found only one account of this setup anywhere on the internet, and that was this issue:
https://issues.apache.org/jira/browse/CASSANDRA-15337
whose author claims to have done it.
In my case I have 2 datacenters, one in the US and one in the EU, with 4 nodes each and RF=3.
Whenever I insert data with a consistency level of LOCAL_QUORUM through an EU coordinator, all is well: the data is inserted into the correct nodes (the EU ones) and later replicated over to the US ones. But when I insert data through a US coordinator, the tracing shows that we hit the EU nodes.
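For context, the writes in question look roughly like this with the 3.x DataStax Java driver (the contact point, keyspace, and table are hypothetical):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class QuorumWriteTrace {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // hypothetical US-region node
                .build()) {
            Session session = cluster.connect();
            // LOCAL_QUORUM should only involve replicas in the coordinator's DC
            Statement insert = new SimpleStatement(
                    "INSERT INTO ks.tbl (id, val) VALUES (1, 'x')")   // hypothetical table
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
                    .enableTracing();
            ResultSet rs = session.execute(insert);
            // the trace shows which coordinator and replicas were actually hit
            System.out.println(rs.getExecutionInfo().getQueryTrace());
        }
    }
}
```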
Does anyone have any experience with this?

I wrote that Jira issue, and yes I did this in production and it did work.
Actually, I remember seeing something similar; if you're using DCAwareRoundRobinPolicy, make sure you're not setting a value greater than 0 in withUsedHostsPerRemoteDc, otherwise there can be "leaks" to the remote DC. Also, do not call allowRemoteDCsForLocalConsistencyLevel (see https://github.com/datastax/java-driver/blob/3.0.x/driver-core/src/main/java/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.java#L311-L334 and https://docs.datastax.com/en/drivers/java/2.0/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.Builder.html).
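A minimal sketch of that with the 3.x driver (the DC name and contact point are hypothetical):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class LocalDcOnlyClient {
    public static void main(String[] args) {
        // Pin the driver to the local DC; with usedHostsPerRemoteDc at 0
        // (the default) no remote-DC hosts ever enter the query plan,
        // and allowRemoteDCsForLocalConsistencyLevel() is never called.
        DCAwareRoundRobinPolicy dcPolicy = DCAwareRoundRobinPolicy.builder()
                .withLocalDc("us-east")        // hypothetical DC name
                .withUsedHostsPerRemoteDc(0)
                .build();

        try (Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // hypothetical contact point
                .withLoadBalancingPolicy(new TokenAwarePolicy(dcPolicy))
                .build()) {
            Session session = cluster.connect();
            // ... run LOCAL_QUORUM statements; they stay in "us-east" ...
        }
    }
}
```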

Related

How to properly connect client application to Scylla or Cassandra?

Let's say I have a cluster of 3 ScyllaDB nodes in my local network (it could be an AWS VPC).
My Java application runs in the same local network.
I'm wondering how to properly connect the app to the DB.
Do I need to specify all 3 IP addresses of the DB nodes for the app?
What if, over time, one or several nodes die and get resurrected on other IPs? Do I have to manually reconfigure the application?
How is this done properly in big real production setups with tens of DB servers, possibly in different datacenters?
I would be very grateful for a code sample showing how to connect a Java app to a multi-node cluster.
You need to specify contact points (you can use DNS names instead of IPs) for several nodes (usually 2-3), and the driver will connect to one of them and discover all the other nodes of the cluster after connecting (see the driver's documentation). After the connection is established, the driver keeps a separate control connection open, and through it receives information about nodes going up and down, joining or leaving the cluster, etc., so it's able to keep its view of the cluster topology up to date.
If you're specifying DNS names instead of IP addresses, it's better to set the configuration parameter datastax-java-driver.advanced.resolve-contact-points to false (see docs), so the names are resolved to IPs on every reconnect instead of only once at application start.
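For example, with the 4.x DataStax Java driver (the host names and DC name are hypothetical), a connection sketch might look like this:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;

public class ClusterConnect {
    public static void main(String[] args) {
        // Two or three contact points are enough; the driver discovers
        // the rest of the cluster over its control connection.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("db-node1.example.com", 9042))
                .addContactPoint(new InetSocketAddress("db-node2.example.com", 9042))
                .withLocalDatacenter("datacenter1")  // must match the nodes' DC name
                .build()) {
            System.out.println(session
                    .execute("SELECT release_version FROM system.local")
                    .one().getString("release_version"));
        }
    }
}
```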
Alex Ott's answer is correct, but I wanted to add a bit more background so that it doesn't look arbitrary.
The selection of the 2 or 3 nodes to connect to is described at
https://docs.scylladb.com/kb/seed-nodes/
However, going forward, Scylla is looking to move away from differentiating between Seed and non-Seed nodes. So, in future releases, the answer will likely be different. Details on these developments at:
https://www.scylladb.com/2020/09/22/seedless-nosql-getting-rid-of-seed-nodes-in-scylla/
Answering the specific questions:
Do I need to specify all 3 IP addresses of DB nodes for the app?
No. Your app just needs one to work. But it might not be a bad idea to have a few, just in case one is down.
What if over time one or several nodes die and get resurrected on other IPs?
As long as your app doesn't stop, it maintains its own version of gossip. So it will see the new nodes being added and connect to them as it needs to.
Do I have to manually reconfigure application?
If you're specifying IP addresses, yes.
How is it done properly in big real production cases with tens of DB servers, possibly in different data centers?
By abstracting away the need for specific IPs, using something like Consul. If you wanted to, you could easily build a simple RESTful service to expose an inventory list or even the results of nodetool status.

How to perform backup and restore of a JanusGraph database backed by Apache Cassandra?

I'm having trouble figuring out how to take a backup of a JanusGraph database backed by Apache Cassandra persistent storage.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this concept and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation to safely execute these tasks.
Thanks a lot for your time.
Cassandra can be backed up in a few ways. One way is called a "snapshot", which you can take via the "nodetool snapshot" command. Cassandra will create a "snapshots" sub-directory, if it doesn't already exist, under each table that's being backed up (each table has its own directory where it stores its data), and then it will create the specific snapshot directory for this particular occurrence of the snapshot (you can either name the directory via a "nodetool snapshot" parameter or let it default).
Cassandra will then create hard links to all of the sstables that exist for that particular table, looping through each table and keyspace depending on your "nodetool snapshot" parameters. This is very fast, as creating the links takes almost no time. You will have to run this command on each node in the cluster to back up all of the data, and each node's data will be backed up to the local host. I know DSE, and possibly Apache, are adding functionality to back up to object storage as well (I don't know if this is an OpsCenter-only capability or if it can be done via the snapshot command too). You will also have to watch the space consumption, as there is no process that cleans these snapshots up.
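A minimal sketch of that flow (the tag and keyspace names are hypothetical):

```sh
# Take a tagged snapshot of one keyspace on this node
# (repeat on every node in the cluster)
nodetool snapshot -t pre_migration janusgraph

# The links appear under each table's data directory:
#   <data_dir>/janusgraph/<table>-<id>/snapshots/pre_migration/

# Snapshots are never removed automatically; clean up when done
nodetool clearsnapshot -t pre_migration
```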
Like many database systems, you can also purchase/use 3rd-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refreshing your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.

Create Windows 2012 failover cluster with different networks

I have two servers in different networks, one in the USA and another in Germany.
Can I put them together in a failover cluster?
Yes, no issue. You can also have a SQL Server AG as well. Make sure you have the firewalls opened properly between these two servers, and you need a common DNS/AD for the failover cluster to work. Also have a 3rd server or a cluster disk for quorum or common data files.

Cassandra data clone to another Cassandra database (different servers)

My question is in the title: I have a Cassandra database and want to use another server with this data. How can I move all of the keyspaces' data?
I have snapshots, but I don't know whether I can load them onto another server.
Thanks for your help.
Unfortunately, you have limited options for moving data across clouds: primarily the COPY command or sstableloader (https://docs.datastax.com/en/cassandra/2.1/cassandra/migrating.html), or, if you plan to maintain a like-for-like setup (same number of nodes) across clouds, simply copying the snapshots under the data directories would work.
If you are moving to IBM Softlayer, you may be able to use software-defined storage solutions that get deployed on bare metal and provide features like thin clones, which allow you to create clones of Cassandra clusters in a matter of minutes and provide incredible space savings. This is rather useful for creating clones for dev/test purposes. Check out Robin Systems; you may find them interesting.
The cleanest way to migrate your data from one cluster to another is the sstableloader tool. It lets you stream the contents of your sstables from a local directory to a remote cluster. In this case the new cluster can be configured differently, and you also don't have to worry about token assignments.
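A sketch of what that might look like (hosts, keyspace, table, and paths are hypothetical):

```sh
# Copy a snapshot's sstables into a <keyspace>/<table> directory layout,
# then stream them to the new cluster
mkdir -p /tmp/load/my_keyspace/my_table
cp /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/my_tag/* \
   /tmp/load/my_keyspace/my_table/
sstableloader -d 10.0.1.1,10.0.1.2 /tmp/load/my_keyspace/my_table
```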

Weird RIAK behavior

I'm a sysadmin and I manage 5 RIAK clusters:
two of them are LXC containers on the same physical machine (3 nodes per cluster)
one of them is LXC containers located on different physical machines (6 nodes)
one of them is a mix of LXC containers located on different physical machines and XEN VMs (6 nodes)
and the last of them is VMware ESX VMs (3 nodes)
Our application works correctly on the first four clusters, but it doesn't work as we expected on the last one.
When we update a key and then retrieve it in order to write it again, it has an old value (it doesn't have the latest value that we wrote), for example:
The key has: lalala
We retrieve the key, and add lololo, so it should be lalala,lololo
We retrieve the key again, and try to add lelele, so it should be now: lalala,lololo,lelele, but when we retrieve it again, we only have: lalala,lelele
On the second write, when we retrieved the key, we got the old value. We set r, w, pr and rw to 3 on the REST requests, but it doesn't help.
All the configuration files are very similar and we don't see any major differences in the disk I/O and network performance of the nodes across the clusters.
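For reference, this is roughly how those quorum parameters are passed on Riak's HTTP API (host, bucket, and key are hypothetical):

```sh
# Read with explicit read quorums, then write back with a write quorum
curl "http://riak-node:8098/riak/mybucket/mykey?r=3&pr=3"
curl -X PUT -H "Content-Type: text/plain" \
     -d "lalala,lololo" "http://riak-node:8098/riak/mybucket/mykey?w=3"
```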
Did anyone have a similar issue?
Regards.
I resolved the issue. VMware has some issues with the clocks; they were not synchronized. The servers don't have a direct internet connection, but they can use an HTTP proxy, so I installed and configured htpdate on these servers to synchronize their clocks through HTTP headers.