VoltDB's web site shows that the Community Edition supports full ACID (which includes the D for durability), but it doesn't look like it supports crash recovery, which seems related to command logging. Is there a difference in the 'D' part of durability between the Community Edition and the commercial versions? If the machine goes down, is all data lost?
Both VoltDB distributions - Community Edition and Enterprise Edition - support durability through database snapshots, which can be executed ad hoc, at admin-defined frequencies or continuously. Snapshots are written to permanent storage. Database recovery (durability) is achieved by restoring from snapshots.
VoltDB Enterprise Edition also includes a feature called Command Logging, which provides durability for transactions that occur in the (typically brief) intervals between snapshots. Command Logging can be configured to run synchronously (100% durability guarantee) or asynchronously (less of an impact on transaction latencies at the cost of losing some transactions during a crash). If asynchronous logging is used, the fsync window can be configured to balance latency and durability objectives.
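For concreteness, here is a minimal sketch of how command logging is typically switched on in a VoltDB deployment file, following the linked command-logging chapter; the cluster sizing and frequency values are placeholders, so verify the element and attribute names against the current documentation:

```xml
<?xml version="1.0"?>
<deployment>
    <!-- placeholder cluster sizing; kfactor relates to k-safety, mentioned below -->
    <cluster hostcount="3" sitesperhost="4" kfactor="1"/>

    <!-- Enterprise Edition only. synchronous="true" gives the 100% durability
         guarantee; synchronous="false" trades a small loss window for lower
         latency, with the fsync window tuned via the frequency element. -->
    <commandlog enabled="true" synchronous="false">
        <frequency time="200" transactions="10000"/>
    </commandlog>
</deployment>
```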
In summary, both distributions of VoltDB support durability via snapshotting, and Enterprise Edition provides additional durability via Command Logging.
It should be noted that both distributions of VoltDB also include built-in high availability via a synchronous multi-master feature called k-safety. You can maintain as many synchronized copies ("masters") of each partition as you wish, and VoltDB will transparently (and synchronously) apply transactions to all appropriate nodes. If one node crashes, its peer(s) simply continue to accept/process work. This "Tandem-style" fault tolerance significantly reduces the likelihood of experiencing an outage that requires database recovery.
Read more about VoltDB snapshots: http://community.voltdb.com/docs/UsingVoltDB/SaveSnapshotAuto
Read more about VoltDB command logging: http://community.voltdb.com/docs/UsingVoltDB/ChapCmdLog
I'm working with Cassandra 3.x and the Phantom driver (Scala), and I am changing my Cassandra deployment from a simple three-node cluster to a multi-datacenter deployment that consists of two datacenters:
Transactional - the "main" datacenter, to which all reads/writes go (except for reads/writes done by some analytics job).
Analytics - a datacenter used for analytics purposes only. The analytics job should operate on (i.e. read from/write to) this datacenter.
I configured the client on the analytics job to read/write to the analytics datacenter, and all other services to read/write from the transactional datacenter.
How can I check the client actually behaves as expected - and reads/writes the data to the correct data-center?
The driver has an option that lets you turn on request tracking/logging. That should allow you to see which nodes are involved with each query.
There's a short description of how to do this on the driver documentation page: https://docs.datastax.com/en/developer/java-driver/4.2/manual/core/logging/
The query logger reference API has a lot more detail on the available methods, and can even show the values of bind vars, if needed.
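If you want a quick spot-check from client code as well, one option is to look at which coordinator answered a request. The sketch below uses the OSS Java driver 4.x that the linked page documents (Phantom wraps the older 3.x Java driver, where the equivalent call is ExecutionInfo.getQueriedHost()); the contact point and the datacenter name "Analytics" are assumptions:

```java
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import com.datastax.oss.driver.api.core.metadata.Node;

public class CoordinatorCheck {
  public static void main(String[] args) {
    // Connect the same way the analytics client does (host and DC name are placeholders)
    try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("analytics-node-1", 9042))
        .withLocalDatacenter("Analytics")
        .build()) {

      SimpleStatement stmt =
          SimpleStatement.newInstance("SELECT release_version FROM system.local");
      ResultSet rs = session.execute(stmt);

      // The node that coordinated this request, and the datacenter it belongs to
      Node coordinator = rs.getExecutionInfo().getCoordinator();
      System.out.println("Coordinator: " + coordinator
          + " (datacenter: " + coordinator.getDatacenter() + ")");
    }
  }
}
```

Run it once with each service's configuration and confirm that the reported datacenter matches what you expect.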
Given this quote from the official Apache Kudu documentation (https://kudu.apache.org/overview.html):
Kudu isn't designed to be an OLTP system, but if you have some subset of data which fits in memory, it offers competitive random access performance. We've measured 99th percentile latencies of 6ms or below using YCSB with a uniform random access workload over a billion rows. Being able to run low-latency online workloads on the same storage as back-end data analytics can dramatically simplify application architecture.
Does this statement imply that Kudu can be used for replication from a JDBC source - the simplest form possible?
Elsewhere I have used Kudu as a replication target for SAP and other COTS systems, so that reports could run against the Kudu tables as opposed to HANA. That was an architecture decided upon by others.
For pure replication of data, primarily for subsequent extraction from a data lake, and for data with accumulated history under 1 TB in size, this is feasible as well. Cloudera confirmed this after discussion. Even though Kudu has a columnar format, and a row format would be desirable here, it simply works well enough.
I am using DataStax Cassandra 4.8.16, with a cluster of 8 DCs and 5 nodes in each DC, running on VMs. For the last couple of weeks we have observed the performance issues below:
1) Increased drop counts on the VMs.
2) LOCAL_QUORUM not achieved for some write operations.
3) Frequent compactions of OpsCenter.rollup_state and system.hints visible in OpsCenter.
Appreciate any help finding the root cause for this.
The presence of dropped mutations means that the cluster is heavily overloaded. It could be an increase in the main load which, combined with the load from OpsCenter, overloads the system. You need to look at statistics on request counts, latencies, etc., per node and per table, to see where the increase happened. Please also check the I/O statistics on the machines (for example, with iostat): queue sizes, read/write latencies, and so on.
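A minimal set of commands for that first look (assuming shell access to the nodes; on DSE 4.8 / Cassandra 2.1 the per-table command is still called cfstats):

```shell
# Dropped message counts and thread-pool backlog on this node
nodetool tpstats

# Per-table latencies, pending compactions, SSTable counts (2.1 name; tablestats in 3.x+)
nodetool cfstats

# Coordinator-level read/write latency percentiles
nodetool proxyhistograms

# Disk queue sizes and read/write latencies, refreshed every 5 seconds
iostat -x 5
```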
It's also recommended to use a dedicated cluster to store OpsCenter metrics; it can be smaller, and it doesn't require an additional DSE license. As the OpsCenter documentation says:
Important: In production environments, DataStax strongly recommends storing data in a separate DataStax Enterprise cluster.
Regarding VMs: it's usually not a recommended setup, but it depends heavily on the underlying hardware: number of CPUs, RAM, and the disk subsystem.
I am planning to set up an 80-node Cassandra cluster (currently version 2.1, but we will upgrade to 3.x in the future).
I have gone through http://graphite.readthedocs.io/en/latest/tools.html, which lists the tools that Graphite supports.
I want to decide which tools to choose as the listener and storage components so that the setup can scale.
As the listener, should I use the default Carbon, or should I choose graphite-ng?
As the storage component, I am unsure whether the default Whisper is enough, or whether I should look at other options (like InfluxDB, Cyanite, or an RDBMS such as PostgreSQL/MySQL).
As the GUI component, I have settled on Grafana for better visualization.
I think Datadog + Grafana would work fine, but Datadog is not open source, so please suggest an open-source alternative that scales up to 100 Cassandra nodes.
I have 35 Cassandra nodes (across different clusters) monitored without any problems with Graphite + Carbon + Whisper + Grafana. But I have to say that re-configuring collection and aggregation windows with Whisper is a pain.
There are many alternatives for this job today; you can use the InfluxDB (+ Telegraf) stack, for example.
Also, with Datadog you don't need Grafana; it is a visualization platform as well. I worked with it some time ago; it has misleading names for some metrics in its plugin, and some metrics were simply missing. On the plus side, it's really easy to install and use.
We have a Cassandra cluster of 36 nodes in production right now (we had 51, but we migrated the instance type since then, so we need fewer C* servers now), monitored using a single Graphite server. We are also saving data for 30 days, but at a 60 s resolution. We excluded the internode metrics (e.g. open connections from a to b) because of how the metric count scales, but keep all others. This totals ~510k metrics, each Whisper file being ~500 KB in size => ~250 GB. iostat tells me that we have write peaks of ~70k writes/s. This is all done on a single AWS i3.2xlarge instance, which includes 1.9 TB of NVMe instance storage and 61 GB of RAM. To fully utilize the power of this disk type we increased the number of carbon-cache processes. CPU usage is very low (<20%) and so is iowait (<1%).
I guess we could get away with a less beefy machine, but this gives us a lot of headroom for growing the cluster, and we are constantly adding new servers. For the monitoring: be prepared for AWS terminating these machines more often than others, so backup and restore are likely to become a regular operation.
I hope this little insight helped you.
I have read about Cassandra backup & recovery here, and have a few questions:
Do the native Cassandra CLI commands suffice? I see a lot of people writing scripts and custom-making their own solutions.
What other tools out there would you recommend for Cassandra backup and recovery? I am looking for something that can help me manage the backup images (e.g. with point-in-time recovery).
Do I need to invest significantly more into storage if I opt to backup my Cassandra tables?
Any insights would be appreciated.
Please try to limit your questions to one actual question.
Do the native Cassandra CLI commands suffice?
I assume that you mean nodetool snapshot, so for the most part, "yes." In addition, many users choose to also enable incremental backups. A combination of snapshots and incremental backups (quoting the linked doc) "provides a dependable, up-to-date backup mechanism."
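As a minimal sketch of those two pieces (the keyspace name and snapshot tag are placeholders):

```shell
# Take a snapshot of one keyspace, labeled with a tag
nodetool snapshot -t pre_upgrade my_keyspace

# Enable incremental backups, which hard-link each flushed SSTable into a
# per-table backups/ directory; either set it permanently in cassandra.yaml
#   incremental_backups: true
# or toggle it at runtime (recent Cassandra versions):
nodetool enablebackup
```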
I see a lot of people writing scripts and custom-making their own solutions.
I have a backup script that runs on my nodes nightly (a rough sketch is shown after the list below). There are two reasons for this.
I don't want to have to manually take a snapshot for each keyspace every week, so I have the script do it.
Snapshot and incremental backup files don't remove themselves, so I have the script do that after a certain time threshold.
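A rough sketch of what such a nightly script can look like; the keyspace, data directory, and 7-day retention are assumptions, and you should verify the clearsnapshot options against your Cassandra version:

```shell
#!/usr/bin/env bash
set -euo pipefail

KEYSPACE="my_keyspace"               # placeholder keyspace
DATA_DIR="/var/lib/cassandra/data"   # default data directory
TODAY=$(date +%Y%m%d)
WEEK_AGO=$(date -d '7 days ago' +%Y%m%d)

# 1) Take tonight's snapshot, tagged with the date
nodetool snapshot -t "auto_${TODAY}" "${KEYSPACE}"

# 2) Remove the snapshot taken a week ago (ignore if it no longer exists)
nodetool clearsnapshot -t "auto_${WEEK_AGO}" || true

# 3) Prune incremental backup files older than 7 days
find "${DATA_DIR}/${KEYSPACE}"/*/backups/ -type f -mtime +7 -delete
```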
What other tools out there would you recommend for Cassandra backup and recovery?
DataStax OpsCenter allows you to schedule backups, but I believe that is only a valid option in the Enterprise edition. You could also look at Netflix's Cassandra backup/recovery tool called Priam. There's also a company called Talena which claims to provide an extensive enterprise-grade backup solution for Cassandra (I don't know anyone who uses them, but they hit me with a marketing email recently so I thought I'd mention it).
Do I need to invest significantly more into storage if I opt to backup my Cassandra tables?
Incremental backups and snapshots can take up a great deal of space if you don't stay on top of them (deleting and/or archiving them). I would try them both out, and keep an eye on your disk usage while you do. If your business requirements include a restore commitment (how far back you need to be able to restore to), you should be able to figure out how many days' worth of backups it makes sense for you to keep around. That should tell you whether or not you need more disk to fulfill those obligations.
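Two quick ways to keep an eye on that disk usage (default data directory assumed; listsnapshots is available in recent Cassandra versions):

```shell
# Name and size of every snapshot this node is holding
nodetool listsnapshots

# Disk used by snapshot and incremental-backup directories under the data dir
du -sh /var/lib/cassandra/data/*/*/snapshots /var/lib/cassandra/data/*/*/backups
```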
Edit 20181205
Do you run nodetool snapshot on each node? What would be the approach if there are three nodes with 100% replication?
Typically yes, nodetool snapshot needs to be run on each node. This helps to ensure backup coverage, as not all of the nodes may be responsible for all of the data.
However, if your cluster runs in a configuration where the number of nodes equals your RF, then each node has a complete copy of the data. In that case, you would need to run nodetool snapshot on only one node, as long as you are confident that repairs are running regularly and your data is consistent.
With regards to point-in-time backup and recovery of Cassandra, there are a few aspects that you need to consider depending on what your needs and limitations are:
Storage Footprint
All the solutions available today will put a big strain on your infrastructure as they would require you to store 3x the data that you absolutely need to, assuming you have a replication factor of 3.
I agree with @Aaron, you need to manage the snapshots yourself because the tools will not do "garbage collection" for you :)
Failure resiliency
All the solutions out there, OpsCenter and others, provide limited failure resiliency. You will lose data if a Cassandra node goes down during a backup window.
This situation is exacerbated when you have incremental backups and a node failure happens during an incremental backup.
Recovery time/speed
Note that you may have to go through a “repair” process during recovery. This is needed because the node level snapshots that the native tools provide are not consistent across the cluster.
Depending on your RTO/RPO needs, this may not be adequate. I suggest you test both the backup and recovery times for your operations before you arrive at any solution.
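As a concrete example of the repair step mentioned above (the keyspace name is a placeholder; run it on the restored cluster once the snapshots are back in place):

```shell
# Bring replicas back in sync after restoring node-level snapshots
nodetool repair my_keyspace
```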
If you are looking for enterprise grade solution for backup and recovery of Cassandra, you may want to check out the solution offered by “Datos IO”. It reduces your storage footprint by 3x while also providing failure resiliency and cluster consistency.