Explanation required for a statement in Cassandra documentation - cassandra

I was going through the DataStax documentation and found an interesting statement.
It claimed "Insert-heavy workloads are CPU-bound in Cassandra before becoming memory-bound".
Can someone explain how this claim was arrived at, and what might be causing this behavior in Cassandra?
Thanks.

For different workloads, Cassandra clusters can be CPU, memory, I/O or (occasionally) network bound. The claim in the documentation is that if you start a new cluster and make lots of inserts, the cluster will initially be CPU bound but after a while becomes bottlenecked on memory.
To process an insert, Cassandra needs to deserialize the messages from the clients, determine which nodes should store the data and send messages to those nodes. Those nodes then store the data in an in-memory data structure called a memtable.
This is almost always CPU bound initially. However, as more data is inserted, the memtables grow large and are flushed to disk and new (empty) memtables are created. The flushed memtables are stored in files known as SSTables. There is an ongoing background process called compaction that merges SSTables together into progressively larger and larger files.
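To make that write path concrete, here is a minimal sketch of an insert-heavy client using the DataStax Python driver; the contact point, keyspace and table (demo.events) are placeholders, not anything from the original question.

from uuid import uuid4
from cassandra.cluster import Cluster

# Placeholder cluster/keyspace; assumes a table like:
#   CREATE TABLE demo.events (id uuid PRIMARY KEY, payload text)
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

# Preparing once keeps per-insert coordinator work down to deserializing
# values and routing the mutation to the replica nodes.
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

for i in range(100000):
    # Each execute() is routed to the replicas, which append the mutation
    # to their commit log and update their in-memory memtables.
    session.execute(insert, (uuid4(), "payload-%d" % i))

cluster.shutdown()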
There are a few reasons why more memory will help at this stage:
If Cassandra is low on heap space, it will flush memtables while they are smaller. This creates smaller SSTables, which means more work to compact them.
If the workload involves overwrites or inserts to the same row at different times, it is much cheaper to apply them while the row is still in a current memtable. If not, the overwrite or new column is stored in a new memtable, then flushed and merged during compaction. So again, less memory means more compaction work.
Your OS uses memory to buffer reads and writes during compaction. If it can't, there will be extra I/O, slowing down memtable flushing and compaction.
Inserts into Cassandra allocate lots of Java objects, which creates work for the garbage collector. If the heap is too small, inserts may be paused while GC runs to free some heap. (On the other hand, if the heap is too large, inserts may be paused for a few seconds during stop-the-world GC.)
So inserts may become memory bound, but they could also become I/O bound. If there isn't enough I/O bandwidth to flush memtables, inserts will block once the memtable flush queue is full. So I think the claim could be a bit more accurate:
Insert-heavy workloads are CPU-bound in Cassandra before becoming memory or I/O bound.
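One rough way to tell whether the bottleneck has shifted from CPU to flushing is to watch the flush-related thread pools in nodetool tpstats: persistently non-zero Blocked counts suggest inserts are waiting on memtable flushes. The sketch below just shells out to nodetool and filters its output; the exact pool names vary by Cassandra version, so treat it as illustrative.

import subprocess

# Assumes nodetool is on PATH; pool names (e.g. "MemtableFlushWriter" vs
# "FlushWriter") differ between Cassandra versions.
out = subprocess.run(["nodetool", "tpstats"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    # Print the header plus any flush-related pool so the Pending/Blocked
    # columns can be eyeballed; sustained Blocked values point at I/O, not CPU.
    if line.startswith("Pool Name") or "Flush" in line:
        print(line)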

Related

How does Spark handle out-of-memory errors when cached (MEMORY_ONLY persistence) data does not fit in memory?

I'm new to Spark and I am not able to find a clear answer to what happens when cached data does not fit in memory.
In many places I found that if the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed.
For example: let's say 500 partitions are created and 200 of them could not be cached; those 200 partitions would then have to be recomputed by re-evaluating the RDD.
If that is the case, then an OOM error should never occur, but it does. What is the reason?
A detailed explanation is highly appreciated. Thanks in advance.
There are different ways you can persist your dataframe in Spark (a short PySpark sketch follows the list below).
1) Persist (MEMORY_ONLY)
When you persist a data frame with MEMORY_ONLY, it is cached in Spark's storage memory as deserialized Java objects. If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed. This is the default level (for RDDs) and can sometimes cause OOM when the RDD is too big and cannot fit in memory (it can also occur after the recomputation effort).
To answer your question:
If that is the case, then an OOM error should never occur, but it does. What is the reason?
Even after recomputation, those partitions still need to fit in memory. If there is no space available, GC will try to free some memory and allocate again; if that is not successful, it will fail with an OOM error.
2) Persist (MEMORY_AND_DISK)
When you persist a data frame with MEMORY_AND_DISK, it is cached in storage memory as deserialized Java objects; if there is not enough memory available in the heap, partitions are spilled to disk. To tackle memory issues it will spill part of the data, or all of it, to disk. (Note: make sure the nodes have enough disk space, otherwise no-disk-space errors will pop up.)
3) Persist (MEMORY_ONLY_SER)
When you persist a data frame with MEMORY_ONLY_SER, it is cached in storage memory as serialized Java objects (one byte array per partition). This is generally more space-efficient than MEMORY_ONLY, but it is more CPU-intensive because the data has to be serialized (the general suggestion here is to use Kryo for serialization). It still faces OOM issues similar to MEMORY_ONLY.
4) Persist (MEMORY_AND_DISK_SER)
This is similar to MEMORY_ONLY_SER; the one difference is that when no heap space is available, the serialized partitions are spilled to disk, the same as with MEMORY_AND_DISK. Use this option when disk space is tight and you want to reduce IO traffic, since the spilled data is smaller than with MEMORY_AND_DISK.
5) Persist (DISK_ONLY)
In this case heap memory is not used for caching; the RDD's partitions are persisted to disk. Make sure to have enough disk space; this option has a large IO overhead, so don't use it for dataframes that are used repeatedly.
6) Persist (MEMORY_ONLY_2 or MEMORY_AND_DISK_2)
These are similar to the MEMORY_ONLY and MEMORY_AND_DISK levels above; the only difference is that these options replicate each partition to two cluster nodes, just to be on the safe side. Use these options when you are running on spot instances.
7) Persist (OFF_HEAP)
Off-heap memory generally holds thread stacks, Spark container application code, network IO buffers, and other OS buffers. With this option you can use that part of RAM, outside the JVM heap, for caching your RDD.
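To tie the levels above to actual API calls, here is a small PySpark sketch (the data is made up). Note that in PySpark cached objects are always pickled, so the _SER variants mainly matter on the Scala/Java side.

from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("persist-demo").getOrCreate()
df = spark.range(0, 1000000)              # toy dataframe

# MEMORY_AND_DISK: partitions that don't fit in storage memory are spilled
# to disk instead of being dropped (this is the default for DataFrame.cache()).
df.persist(StorageLevel.MEMORY_AND_DISK)
df.count()                                # materialize the cache
df.unpersist()

# MEMORY_ONLY: partitions that don't fit are simply not cached and get
# recomputed from lineage on the next access.
df.persist(StorageLevel.MEMORY_ONLY)
df.count()

spark.stop()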

What is the purpose of Cassandra's commit log?

Please can someone clarify the commit log and its use for me?
In Cassandra, when writing to disk, is the commit log the first entry point, or are the memtables?
If the memtables are what gets flushed to disk, what is the use of the commit log? Is its only purpose to handle sync issues if a data node goes down?
You can think of the commit log as an optimization, but Cassandra would be unusably slow without it. When MemTables get written to disk we call them SSTables. SSTables are immutable, meaning once Cassandra writes them to disk it does not update them. So when a column changes Cassandra needs to write a new SSTable to disk. If Cassandra was writing these SSTables to disk on every update it would be completely IO bound and very slow.
So Cassandra uses a few tricks to get better performance. Instead of writing SSTables to disk on every column update, it keeps the updates in memory and flushes those changes to disk periodically to keep the IO to a reasonable level. But this leads to the obvious problem that if the machine goes down or Cassandra crashes you would lose data on that node. To avoid losing data, in addition to keeping recent changes in memory, Cassandra writes the changes to its CommitLog.
You may be asking why is writing to the CommitLog any better than just writing the SSTables. The CommitLog is optimized for writing. Unlike SSTables which store rows in sorted order, the CommitLog stores updates in the order which they were processed by Cassandra. The CommitLog also stores changes for all the column families in a single file so the disk doesn't need to do a bunch of seeks when it is receiving updates for multiple column families at the same time.
Basically, writing the CommitLog to disk is better because it has to write less data than writing SSTables does, and it writes all that data to a single place on disk.
Cassandra keeps track of what data has been flushed to SSTables and is able to truncate the Commit log once all data older than a certain point has been written.
When Cassandra starts up it has to read the commit log back from that last known good point in time (the point at which we know all previous writes were written to an SSTable). It re-applies the changes in the commit log to its MemTables so it can get back into the state it was in when it stopped. This process can be slow, so if you are stopping a Cassandra node for maintenance it is a good idea to use nodetool drain before shutting it down, which will flush everything in the MemTables to SSTables and make the amount of work on startup a lot smaller.
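The idea is easier to see in a toy sketch: append every change to a sequential log and sync it before applying the change to the in-memory table, then replay the log on startup. This is only an illustration of the general write-ahead-log pattern, not Cassandra's actual code or file format (the file name and JSON format here are invented).

import json, os

LOG_PATH = "commitlog.txt"     # invented file name
memtable = {}                  # in-memory key -> value map

def write(key, value):
    # 1) Append the mutation to the sequential commit log and sync it...
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"k": key, "v": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())
    # 2) ...then apply it to the memtable. A crash after the fsync can
    #    always be recovered by replaying the log.
    memtable[key] = value

def replay():
    # On startup, rebuild the memtable from the log so no acknowledged
    # write is lost even though the memtable itself was never persisted.
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as log:
            for line in log:
                entry = json.loads(line)
                memtable[entry["k"]] = entry["v"]

replay()
write("row-1", "hello")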
The write path in Cassandra works like this:
Cassandra Node ----> Commitlog -----------------> Memtable
                         |                            |
                         |                            |
                         |---> Periodically           |---> Periodically
                               sync to disk                 flush to SSTable
Memtable and Commitlog are not (quite) written in parallel. The write to the Commitlog must finish before the write to the Memtable starts. The relevant source code call stack is:
org.apache.cassandra.service.StorageProxy.mutateMV: mutation.apply ->
org.apache.cassandra.db.Mutation.apply: Keyspace.open(keyspaceName).apply ->
org.apache.cassandra.db.Keyspace.apply ->
org.apache.cassandra.db.Keyspace.applyInternal {
    Tracing.trace("Appending to commitlog");
    commitLogPosition = CommitLog.instance.add(mutation);
    ...
    Tracing.trace("Adding to {} memtable", ...);
    ...
    upd.metadata().name(...);
    ...
    cfs.apply(...);
    ...
}
The purpose of the Commitlog is to be able to recreate the Memtable after a node crashes or gets rebooted. This is important, since the Memtable only gets flushed to disk when it's 'full' - meaning the configured Memtable size is exceeded - or the flush is performed by nodetool or opscenter. So the data in Memtable is not persisted directly.
Having said that, a good thing before rebooting a node or container is to call nodetool flush to make sure your Memtables are fully persisted (flushed) to SSTables on disk. This also will reduce playback time of the Commitlog after the node or container comes up again.

CPU 100% due to thousands of Pending Compaction

Recently we inserted millions of records into a table and deleted millions of records from it, and a table of size 10 GB was truncated.
We are running 2 nodes with SizeTieredCompactionStrategy. CPU utilization is currently at 100% and the number of pending compactions keeps increasing; it is currently 293144.
Any pointers on how to reduce CPU utilization and get the compaction done quickly?
reduce CPU utilization and get the compaction done quickly.
These two things are orthogonal. You can either accelerate the compactions (by giving them more resources) or limit their resources so that your writes aren't affected, at the cost of compaction taking longer.
If you have an ingest running against your Cassandra cluster, I would try to ensure that it is not affected by your compactions. As long as the number of pending compactions is decreasing over time, it's just a matter of time.
If you don't have reads or writes coming in (i.e. downtime or you're bootstrapping), it's okay to let compactions use up all your resources and finish fast.
How?
The levers are:
1) Compaction throughput (nodetool getcompactionthroughput / setcompactionthroughput) -- a change only kicks in for the next compaction that starts. This controls how fast compaction is allowed to write; the default is 16 MB/s. If you have resources available, you can increase it to a larger number (a short sketch follows this list).
2) Concurrent compactors -- there are two values you have to set via JMX; you can do this on the fly using jmxsh, jconsole, etc. This is the number of compactions you can run at a time (bounded by the number of cores).
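As a rough how-to for the first lever, the sketch below shells out to nodetool getcompactionthroughput / setcompactionthroughput (real nodetool commands; the 64 MB/s value is just an example and the sketch assumes nodetool is on the PATH).

import subprocess

def nodetool(*args):
    return subprocess.run(["nodetool", *args],
                          capture_output=True, text=True, check=True).stdout

print(nodetool("getcompactionthroughput"))   # current MB/s cap

# Raise the cap while the cluster is otherwise idle so compactions can
# catch up faster; a value of 0 removes the cap entirely.
nodetool("setcompactionthroughput", "64")

print(nodetool("compactionstats"))           # pending/active compactions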
Monitoring
Watch nodetool compactionstats or OpsCenter (you can also chart pending compactions and set alerts) to find out the progress of the current compactions, or nodetool compactionhistory for completed compactions.
Other things
a table of size 10 GB was truncated.
Truncates are free, no compaction needed.

Do we need to run manual compaction with the Leveled compaction strategy and the SizeTiered compaction strategy?

We have a couple of tables with Leveled compaction strategy and SizeTiered compaction strategy. How often do we need to run compaction? Thanks in advance
TL;DR
Compaction runs on its own (as long as you did not disable autocompaction in the yaml).
Compaction - what is it?
Per the Cassandra write path, memtables are flushed to disk periodically into SSTables (sorted string tables), which are immutable. When you update an existing cell, the update eventually gets written to an SSTable, possibly a different one from the SSTable holding the original record. When we read, C* sometimes has to scan across various SSTables (with some optimizations, see bloom filters) to find the latest version of a cell. In Cassandra, the last write wins.
Compaction takes sstables and compacts them together removing duplicate data, to optimize reads. This is an automatic operation, though you can tune compactions to run more or less often.
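A tiny sketch of that last-write-wins behavior, using the DataStax Python driver against a hypothetical demo.kv table (k text PRIMARY KEY, v text):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("demo")

session.execute("INSERT INTO kv (k, v) VALUES ('key1', 'first')")
session.execute("INSERT INTO kv (k, v) VALUES ('key1', 'second')")   # overwrite

# Even if the two versions of the cell live in different memtables/SSTables,
# the read resolves to the newest value by write timestamp.
row = session.execute("SELECT v, WRITETIME(v) FROM kv WHERE k = 'key1'").one()
print(row.v)   # 'second'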
Some useful details on Compaction
Size-tiered compaction is the default compaction strategy in Cassandra. It looks for SSTables that are about the same size and compacts them together when it finds enough of them (4 by default). Size-tiered is less IO intensive than leveled and will generally work better when you have smaller boxes and rotational drives.
Leveled compaction is optimized for reads. When you have read-heavy workloads, or tight read SLAs combined with lots of updates, leveled may make sense. Leveled compaction is more IO and CPU intensive because you are spending more cycles optimizing for reads, but the reads themselves should be faster and hit fewer SSTables. Keep an eye on IO wait and on pending compactions in nodetool compactionstats when you first enable it or if your workload grows.
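For reference, the compaction strategy is set per table in CQL. A hedged example via the Python driver, with a placeholder keyspace/table and example option values:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()

# Size-tiered (the default): compact once min_threshold similarly-sized
# SSTables have accumulated.
session.execute("""
    ALTER TABLE demo.events
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'min_threshold': 4}
""")

# Leveled: better read latency for read-heavy, update-heavy tables, at the
# cost of more compaction IO/CPU. 160 MB is just an example sstable size.
session.execute("""
    ALTER TABLE demo.events
    WITH compaction = {'class': 'LeveledCompactionStrategy',
                       'sstable_size_in_mb': 160}
""")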
Compaction Tunables / Levers
Multithreaded compaction - turn it off; the overhead is bigger than the benefit, to the point where it has been removed in C* 2.1.
Concurrent compactors - now defaults to 2; it used to default to the number of cores, which is a bad default. If you're on the 2.0 branch and not running the latest DSE, check this value and consider decreasing it to 2. This is the number of simultaneous compaction tasks you can run (on different column families).
Compaction throttling - a way of limiting the amount of resources that compactions take up. You can tune this on the fly with nodetool getcompactionthroughput and nodetool setcompactionthroughput. You want to tune this to a point where you are not accumulating pending tasks. 0 --> unlimited. Unlimited is, unintuitively, not usually the fastest setting, as the system may get bogged down.

Should I stripe Cassandra commit log?

Is there any benefit in striping the Cassandra commit log? For example, create a RAID1 of multiple disks. If each commit log flush is large enough (larger than stripe size) will it take advantage of the multiple spindles?
I've never seen any Cassandra deployments limited by I/O writing the commit log. Much more I/O will be generated by flushing memtables, compaction and reads.
RAID1 is mirroring so would only increase reliability. This is unlikely to be worth it since the commitlog is replicated through normal Cassandra replication.
Striping with RAID0 might help write throughput for the commitlog but I doubt you'd notice any overall performance improvement.
