Why does the RPC timeout occur in Cassandra?

I tried using cqlsh with CQL 3 (cqlsh -3) on my keyspace and ran a SELECT query on a column family.
It returns data in some cases and throws an RPC timeout in other cases; I don't know the exact root cause.
I used a SELECT query with a single WHERE condition:
select * FROM date where date='2013-10-11 00:00:00+0000';
This date column has a secondary index, with datatype text in UTF-8 format. When the query fails, cqlsh reports:
Request did not complete within rpc_timeout.
I checked the Cassandra log; it throws:
ERROR [ReadStage:117] 2013-12-03 19:21:46,813 CassandraDaemon.java (line 192) Exception in thread Thread[ReadStage:117,5,main]
at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:119)
at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:60)
at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:132)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1390)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1213)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1125)
at org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:191)
at org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:109)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1499)
at org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:82)
at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1487)
at org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:44)
at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1055)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1547)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Why is this happening?
I am testing locally with a single seed node.
Update 1:
My date table structure:
CREATE TABLE date (
key text PRIMARY KEY,
date text,
date_id text,
day bigint,
day_name text
) WITH COMPACT STORAGE AND
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'min_sstable_size': '52428800', 'class': 'SizeTieredCompactionStrategy'} AND
compression={'chunk_length_kb': '64', 'sstable_compression': 'SnappyCompressor'};
I checked the Cassandra log; it shows:
ERROR [ReadStage:94] 2013-12-03 22:07:17,116 CassandraDaemon.java (line 192) Exception in thread Thread[ReadStage:94,5,main]
java.lang.AssertionError: DecoratedKey(-8665312888645846270,.......................<!--some bytes of numbers------->
/var/lib/cassandra/data/keyspace/columnfamily/keyspace-columnfamily-ic-1-Data.db
at org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:119)
at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:60)
at org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:132)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1390)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1213)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1125)
at org.apache.cassandra.db.Table.getRow(Table.java:347)
at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1033)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1547)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
I am currently using Cassandra 1.2.6.
I checked this link; is this the same Cassandra issue?
https://issues.apache.org/jira/browse/CASSANDRA-4687

If your query is expensive, this can lead to RPC timeouts. There are a bunch of SO questions along this line for different types of query, e.g., Fetching all the records for a partitionID in cassandra gives RPC timeout, RPC timeout in cqlsh - Cassandra (select count(*) queries). However, your question relates specifically to secondary indices.
Querying on secondary indices should be avoided if the number of unique indexed entries is high, as such queries are much more expensive than querying by key (I suspect this is the case if you are indexing dates). Perhaps there is a better way of modelling your data? (If you add the data model to your question I can try to elaborate on this answer.)
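For example, a rough sketch based on the table shown above (illustrative only: the new table name is made up, and it assumes you mostly look rows up by date) would be to make the date the partition key, so the query becomes a direct partition read instead of a secondary-index scan:
-- hypothetical query table; the original row key becomes a clustering column
CREATE TABLE date_by_value (
    date text,
    key text,
    date_id text,
    day bigint,
    day_name text,
    PRIMARY KEY (date, key)
);
-- the same lookup now hits a single partition
SELECT * FROM date_by_value WHERE date = '2013-10-11 00:00:00+0000';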

Related

JanusGraph query failure due to Cassandra backend tombstone exception

I have raised a GitHub issue regarding this as well. Pasting the same below.
JanusGraph version - janusgraph-0.3.1
Cassandra - cassandra:3.11.4
When we run JanusGraph with the Cassandra backend, after a period of time JanusGraph starts throwing the errors below and goes into an unusable state.
JanusGraph Logs:
466489 [gremlin-server-exec-6] INFO org.janusgraph.diskstorage.util.BackendOperation - Temporary exception during backend operation [EdgeStoreKeys]. Attempting backoff retry.
org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at io.vavr.API$Match$Case0.apply(API.java:3174)
at io.vavr.API$Match.of(API.java:3137)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.lambda$static$0(CQLKeyColumnValueStore.java:123)
at io.vavr.control.Try.getOrElseThrow(Try.java:671)
at org.janusgraph.diskstorage.cql.CQLKeyColumnValueStore.getKeys(CQLKeyColumnValueStore.java:405)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:130)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:30)
Cassandra Logs:
WARN [ReadStage-2] 2019-07-19 11:40:02,980 ReadCommand.java:569 - Read 74 live rows and 100001 tombstone cells for query SELECT * FROM janusgraph.edgestore WHERE column1 >= 02 AND column1 <= 03 LIMIT 100 (see tombstone_warn_threshold)
ERROR [ReadStage-2] 2019-07-19 11:40:02,980 StorageProxy.java:1896 - Scanned over 100001 tombstones during query 'SELECT * FROM janusgraph.edgestore WHERE column1 >= 02 AND column1 <= 03 LIMIT 100' (last scanned row partion key was ((00000000002b9d88), 02)); query aborted
Related Question:
Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
Questions:
1) Are edge updates stored as new items, causing tombstones? (Since JanusGraph is a fork of Titan.)
How to increment Number of Visit count in Titan graph database Edge Label?
https://github.com/JanusGraph/janusgraph/issues/934
2) What is the right approach to deal with this?
Any solutions or pointers would be really helpful.
[Update]
1) Updates to edges did not cause tombstones in JanusGraph.
2) Solutions:
- As per the answer, reduce gc_grace_seconds to a lower value based on the rate of edge/vertex deletions.
- Also consider tuning tombstone_failure_threshold in cassandra.yaml based on your needs.
In Cassandra, a tombstone is a flag that indicates that a record should be deleted; this can occur after a delete operation is explicitly requested, or once the Time To Live (TTL) period has expired. A record with a tombstone persists for the time defined by gc_grace_seconds after the delete operation is executed, which is 10 days by default.
Usually running nodetool repair janusgraph edgestore (based on the error log provided) should fix the issue. If you are still getting the error, you may need to decrease the gc_grace_seconds value of your table, as explained here.
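For illustration, lowering gc_grace_seconds is a single schema change (the table name is taken from the error log above; the value here is just an example and should be at least as long as the interval between your repairs):
-- default is 864000 (10 days); 86400 = 1 day, assuming repairs run at least daily
ALTER TABLE janusgraph.edgestore WITH gc_grace_seconds = 86400;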
For more information regarding tombstones:
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
Tombstone vs nodetool and repair

Cassandra frequently crashed when working with WSO BAM 2.5.0

We are using Cassandra 1.2.9 + BAM 2.5 for API analysis.
We have scheduled a job to purge Cassandra data. This purge job is divided into three steps.
The 1st step is to query the original column family and insert the rows into a temporary column family, columnFamily_purge.
The 2nd step is to delete from the original column family (adding tombstones), and to insert the data from columnFamily_purge back into the original column family.
The 3rd step is to drop the temporary columnFamily_purge.
The 1st step works well, but the 2nd step frequently crashes the Cassandra servers during the Hadoop map tasks, which makes Cassandra unavailable. The exception stack trace is as follows:
2016-08-23 10:27:43,718 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName hadoop for UID 47338 from the native implementation
2016-08-23 10:27:43,720 WARN org.apache.hadoop.mapred.Child: Error running child
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:390)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:244)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.deleteRow(AbstractColumnFamilyTemplate.java:173)
at org.wso2.carbon.bam.cassandra.data.archive.mapred.CassandraMapReduceRowDeletion$RowKeyMapper.map(CassandraMapReduceRowDeletion.java:246)
at org.wso2.carbon.bam.cassandra.data.archive.mapred.CassandraMapReduceRowDeletion$RowKeyMapper.map(CassandraMapReduceRowDeletion.java:139)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Could someone help with what may be leading to this problem? Thanks!
This can happen for three reasons:
1) The Cassandra servers are down. I don't think this is the case in your setup.
2) Network issues.
3) The load is higher than what the cluster can handle.
How do you delete data? Using a Hive script?
After I increased the number of open files and the maximum thread count, the problem was gone.

Cassandra throwing NoHostAvailableException after 5 minutes of high IOPS run

I'm using the DataStax Cassandra 2.1 driver and performing read/write operations at a rate of ~8000 IOPS. I've used pooling options to configure my sessions, and I use a separate session for reads and for writes, each of which connects to a different node in the cluster as its contact point.
This works fine for, say, 5 minutes, but after that I get a lot of exceptions like:
Failed with: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.0.1.123:9042 (com.datastax.driver.core.TransportException: [/10.0.1.123:9042] Connection has been closed), /10.0.1.56:9042 (com.datastax.driver.core.exceptions.DriverException: Timeout while trying to acquire available connection (you may want to increase the driver number of per-host connections)))
Can anyone help me out here on what the problem could be?
The exception asks me to increase the number of connections per host, but how high a value can I set for this parameter?
Also, I'm not able to set CoreConnectionsPerHost beyond 2, as it throws an exception saying 2 is the max.
This is how I'm creating each read/write session:
PoolingOptions poolingOpts = new PoolingOptions();
poolingOpts.setCoreConnectionsPerHost(HostDistance.REMOTE, 2);
poolingOpts.setMaxConnectionsPerHost(HostDistance.REMOTE, 200);
poolingOpts.setMaxSimultaneousRequestsPerConnectionThreshold(HostDistance.REMOTE, 128);
poolingOpts.setMinSimultaneousRequestsPerConnectionThreshold(HostDistance.REMOTE, 2);
cluster = Cluster
    .builder()
    .withPoolingOptions(poolingOpts)
    .addContactPoint(ip)
    .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
    .withReconnectionPolicy(new ConstantReconnectionPolicy(100L))
    .build();
Session s = cluster.connect(keySpace);
Your problem might not actually be in your code or the way you are connecting. If the problem only appears after a few minutes, it could simply be that your cluster is becoming overloaded trying to process the ingested data and cannot keep up. The typical sign of this is JVM garbage collection ("GC") messages in the Cassandra system.log file: too many small collections close together, or large ones on their own, can mean that incoming clients are not being responded to, causing this kind of scenario. Verify that you do not have too many of these events showing up in your logs before you start to look at your code. Here's a good example of a large GC event:
INFO [ScheduledTasks:1] 2014-05-15 23:19:49,678 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 2896 ms for 2 collections, 310563800 used; max is 8375238656
When connecting to a cluster there are some recommendations, one of which is to have only one Cluster object per real cluster. As per the article I've linked below (apologies if you have already studied this):
Use one cluster instance per (physical) cluster (per application lifetime)
Use at most one session instance per keyspace, or use a single Session and explicitly specify the keyspace in your queries
If you execute a statement more than once, consider using a prepared statement
You can reduce the number of network roundtrips and also have atomic operations by using batches
http://www.datastax.com/documentation/developer/java-driver/2.1/java-driver/fourSimpleRules.html
As you are doing a high number of reads, I'd most definitely recommend using setFetchSize as well, if it's applicable to your code; see the sketch after the links below.
http://www.datastax.com/documentation/developer/java-driver/2.1/common/drivers/reference/cqlStatements.html
http://www.datastax.com/documentation/developer/java-driver/2.1/java-driver/reference/queryBuilderOverview.html
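As a rough sketch only (the keyspace, table, column, and someId below are hypothetical, and session is an already-connected Session), preparing the statement once and setting a fetch size keeps large result sets paged instead of being pulled in one go:
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Prepare once and reuse for every read (hypothetical keyspace/table/column names).
PreparedStatement ps = session.prepare(
        "SELECT value FROM my_keyspace.my_table WHERE id = ?");

BoundStatement bound = ps.bind(someId);
bound.setFetchSize(500);   // page 500 rows at a time instead of the whole result set

ResultSet rs = session.execute(bound);
for (Row row : rs) {
    // the driver fetches further pages transparently as you iterate;
    // replace this with your own row handling
    System.out.println(row.getString("value"));
}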
For reference, here are the connection options in case you find them useful:
http://www.datastax.com/documentation/developer/java-driver/2.1/common/drivers/reference/connectionsOptions_c.html
Hope this helps.

Cassandra failed building secondary index upon restart

Cassandra was killed possibly due to low memory on the server. Upon restart, Cassandra failed building an AsciiType secondary index with a java.lang.ClassCastException. Here is the cassandra log output:
INFO 16:31:37,109 Creating new index : ColumnDefinition {
name=666f6c6c6f7754797065,
validator=org.apache.cassandra.db.marshal.AsciiType,
index_type=KEYS,
index_name='mySecondaryIndexField'
}
INFO 16:31:37,115 reading saved cache /var/lib/cassandra/saved_caches/MyProject-MyCF.mySecondaryIndexField-KeyCache
INFO 16:31:37,117 Opening /var/lib/cassandra/data/MyProject/MyCF/MyProject-MyCF.mySecondaryIndexField-hd-1 (399 bytes)
ERROR 16:31:37,121 Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.ClassCastException: [B cannot be cast to java.nio.ByteBuffer
at org.apache.cassandra.db.marshal.AsciiType.compare(AsciiType.java:28)
at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:45)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:89)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:38)
at java.util.TreeMap.getEntry(TreeMap.java:345)
at java.util.TreeMap.containsKey(TreeMap.java:226)
at java.util.TreeSet.contains(TreeSet.java:234)
at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:396)
at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:187)
at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
As a result, queries using this secondary index returned only a small portion of the result set. Cassandra was then restarted a second time; this time the exception was not thrown, the secondary index was rebuilt correctly, and all queries recovered and returned the results as expected.
There are only 2 possible String values for my secondary index field "mySecondaryIndexField".
Here is also the configuration for my Column Family:
Column Type - Standard
Comparator Type - org.apache.cassandra.db.marshal.AsciiType
Read Repair Chance - 1
Index Options - name: mySecondaryIndexField
validation_class: org.apache.cassandra.db.marshal.AsciiType
index_type: 0
index_name: mySecondaryIndexField
index_options:
Gc Grace Seconds - 864000
Default Validation Class - org.apache.cassandra.db.marshal.BytesType
Id - 1023
Min Compaction Threshold - 4
Max Compaction Threshold - 32
Replicate On Write - 1
Key Validation Class - org.apache.cassandra.db.marshal.BytesType
Compaction Strategy - org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
Compaction Strategy Options - None
Sstable Compression -org.apache.cassandra.io.compress.SnappyCompressor
Caching - KEYS_ONLY
Has anyone run into similar problems? The Cassandra version is 1.1.1.

Cassandra 1.1.1 crashes while inserting heavy data using Hector 1.0.5

I am using Cassandra 1.1.1 with Hector 1.0.5 and am trying to insert a heavy volume of data into a column family. During execution of my program, the Cassandra server crashes with an out-of-memory error, after which I am left with no option but to quit the server. This happens repeatedly for one column family in which I am storing HTML file contents, and I never get the chance to complete the load. The HTML content varies from 225 KB to 700 KB per row, and I am trying to insert almost 1000 records.
The program throws the following:
Exception in thread "main" me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:393)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:249)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at com.epocrates.soa.rx.util.DiseaseImporter.insertDisease(DiseaseImporter.java:207)
at com.epocrates.soa.rx.util.DiseaseImporter.batchProcess(DiseaseImporter.java:81)
at com.epocrates.soa.rx.util.DiseaseImporter.main(DiseaseImporter.java:37)
In system.log, I find the following:
java.io.IOError: java.io.IOException: Map failed
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:127)
at org.apache.cassandra.db.commitlog.CommitLogSegment.freshSegment(CommitLogSegment.java:80)
at org.apache.cassandra.db.commitlog.CommitLogAllocator.createFreshSegment(CommitLogAllocator.java:244)
at org.apache.cassandra.db.commitlog.CommitLogAllocator.access$500(CommitLogAllocator.java:49)
at org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:104)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:119)
... 6 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:755)
... 7 more
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut down
at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:60)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
at org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:457)
at org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:314)
at org.apache.cassandra.service.StorageProxy$2.apply(StorageProxy.java:119)
at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:260)
at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:193)
at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:637)
at org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:587)
at org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:595)
at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3112)
at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3100)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:186)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
This means that you've run out of address space to map commitlog segments into.
Best solution: upgrade to a 64-bit JVM.
Worse solution: in cassandra.yaml, set commitlog_segment_size_in_mb and commitlog_total_space_in_mb both to 16.
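For reference, the corresponding cassandra.yaml entries would look like this (the values are the ones suggested above; adjust to your environment):
commitlog_segment_size_in_mb: 16
commitlog_total_space_in_mb: 16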
This isn't the first time this has come up; I've opened https://issues.apache.org/jira/browse/CASSANDRA-4422 to improve the defaults.
