DataStax Enterprise: Spark Cassandra Batch Size - cassandra

I set the parameter spark.cassandra.output.batch.size.rows in my SparkConf as following:
val conf = new SparkConf(true)
.set("spark.cassandra.connection.host", "host")
.set("spark.cassandra.auth.username", "cassandra")
.set("spark.cassandra.auth.password", "cassandra")
.set("spark.cassandra.output.batch.size.rows", "5120")
.set("spark.cassandra.output.concurrent.writes", "10")
but when I perform
saveToCassandra("data","ten_days")
I continue to see warnings in my system.log:
INFO [FlushWriter:7] 2014-11-20 11:11:16,498 Memtable.java (line 395) Completed flushing /var/lib/cassandra/data/system/hints/system-hints-jb-76-Data.db (5747287 bytes) for commitlog position ReplayPosition(segmentId=1416480663951, position=44882909)
INFO [FlushWriter:7] 2014-11-20 11:11:16,499 Memtable.java (line 355) Writing Memtable-ten_days#1656582530(32979978/329799780 serialized/live bytes, 551793 ops)
WARN [Native-Transport-Requests:761] 2014-11-20 11:11:16,499 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36825, exceeding specified threshold of 5120 by 31705.
WARN [Native-Transport-Requests:777] 2014-11-20 11:11:16,500 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36813, exceeding specified threshold of 5120 by 31693.
WARN [Native-Transport-Requests:822] 2014-11-20 11:11:16,501 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36823, exceeding specified threshold of 5120 by 31703.
WARN [Native-Transport-Requests:835] 2014-11-20 11:11:16,500 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36817, exceeding specified threshold of 5120 by 31697.
WARN [Native-Transport-Requests:781] 2014-11-20 11:11:16,501 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36817, exceeding specified threshold of 5120 by 31697.
WARN [Native-Transport-Requests:755] 2014-11-20 11:11:16,501 BatchStatement.java (line 226) Batch of prepared statements for [data.ten_days] is of size 36822, exceeding specified threshold of 5120 by 31702.
I know these are only warnings, but I would like to understand why my settings aren't working as expected. I can also see a lot of hints in my cluster. Could the batch size affect the number of hints in the cluster?
Thanks

You have set the batch size in rows instead of the batch size in bytes. This means the connector is limiting batches by number of rows rather than by the memory size of the batch.
spark.cassandra.output.batch.size.rows: number of rows per single batch; the default is 'auto', which means the connector will adjust the number of rows based on the amount of data in each row
spark.cassandra.output.batch.size.bytes: maximum total size of the batch in bytes; defaults to 64 kB.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
On a more important note, you are most likely going to be better off with a larger batch size (64 kB) and raising the warn limit in the cassandra.yaml file.
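For example, to cap batches by total size instead of row count, a minimal sketch of the SparkConf (the 64 kB value is illustrative, and batch_size_warn_threshold_in_kb is the cassandra.yaml warn limit referred to above):
val conf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "host")
  .set("spark.cassandra.auth.username", "cassandra")
  .set("spark.cassandra.auth.password", "cassandra")
  // limit each batch by total bytes rather than by number of rows
  .set("spark.cassandra.output.batch.size.bytes", "65536")
  .set("spark.cassandra.output.concurrent.writes", "10")
// alternatively (or additionally), raise batch_size_warn_threshold_in_kb in cassandra.yaml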
Edit:
Recently we've seen that larger batches can cause instability with certain C* configurations, so lower the value if the system becomes unstable.

Related

spark task size too big

I'm using LBFGS logistic regression to classify examples into one of two categories. When I'm training the model, I get many warnings of this kind:
WARN scheduler.TaskSetManager: Stage 132 contains a task of very large size (109 KB). The maximum recommended task size is 100 KB.
WARN scheduler.TaskSetManager: Stage 134 contains a task of very large size (102 KB). The maximum recommended task size is 100 KB.
WARN scheduler.TaskSetManager: Stage 136 contains a task of very large size (109 KB). The maximum recommended task size is 100 KB.
I have about 94 features and about 7500 training examples. Is there some other argument I should pass in order to break up the task size into smaller chunks?
Also, is this just a warning that can, in the worst case, be ignored? Or does it hamper the training?
I'm calling my trainer this way:
val lr_lbfgs = new LogisticRegressionWithLBFGS().setNumClasses(2)
lr_lbfgs.optimizer.setRegParam(reg).setNumIterations(numIterations)
val model = lr_lbfgs.run(trainingData)
Also, my driver and executor memory is 20G, which I set as arguments to spark-submit.
Spark sends a copy of every variable and method that needs to be visible to the executors; this warning means that, in total, these objects exceed 100 KB. You can safely ignore this warning if it doesn't impact performance noticeably, or you could consider marking some variables as broadcast variables.
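A minimal sketch of the broadcast-variable idea, reusing sc and trainingData from the question; the lookup map here is a hypothetical stand-in for whatever large object the task closure is capturing:
// Hypothetical sketch: ship a large read-only object to executors once via a
// broadcast variable instead of serializing it into every task closure.
val lookup: Map[Int, Double] = (0 until 94).map(i => i -> 1.0).toMap  // placeholder data
val lookupBc = sc.broadcast(lookup)
val adjusted = trainingData.map { point =>
  // reference lookupBc.value inside the closure, not `lookup` directly
  point.features.toArray.zipWithIndex.map { case (v, i) => v * lookupBc.value.getOrElse(i, 1.0) }.sum
}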

Cassandra killed by O/S without error in logs

One node of my Cassandra cluster breaks very often, roughly every day. It mostly breaks in the morning, most probably because of memory consumption by other jobs, but I don't see any error message in the logs. The following are the log messages when it breaks:
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,629 Memtable.java:385 - Completed flushing /data/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/system-compaction_history-ka-6666-Data.db (6945 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,631 Memtable.java:346 - Writing Memtable-sstable_activity#823770964(21691 serialized bytes, 7605 ops, 0%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,650 Memtable.java:385 - Completed flushing /data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7558-Data.db (13689 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
INFO [CompactionExecutor:1176] 2016-04-07 07:19:19,651 CompactionTask.java:140 - Compacting [SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7557-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7556-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7555-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7558-Data.db')]
INFO [CompactionExecutor:1176] 2016-04-07 07:19:19,696 CompactionTask.java:270 - Compacted 4 sstables to [/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7559,]. 46,456 bytes to 9,197 (~19% of original) in 45ms = 0.194910MB/s. 1,498 total partitions merged to 216. Partition merge counts were {1:675, 2:33, 3:11, 4:181, }
INFO [MemtableFlushWriter:312] 2016-04-07 07:19:19,717 Memtable.java:385 - Completed flushing /data/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-8014-Data.db (849738 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
I know that the OS kills it, but how can I find the root cause of the problem, and are there any config changes that can be made for Cassandra?

Cassandra: Long Par New GC Pauses when Bootstrapping new nodes to cluster

I've seen an issue that happens fairly often when bootstrapping new nodes to a DataStax Enterprise Cassandra cluster (version 2.0.10.71).
When starting the new node to be bootstrapped, the bootstrap process starts to stream data from other nodes in the cluster. After a short period of time (usually a minute or less), other nodes in the cluster show high ParNew GC pause times, and then those nodes drop off from the cluster, failing the stream session.
INFO [main] 2015-04-27 16:59:58,644 StreamResultFuture.java (line 91) [Stream #d42dfef0-ecfe-11e4-8099-5be75b0950b8] Beginning stream session with /10.1.214.186
INFO [GossipTasks:1] 2015-04-27 17:01:06,342 Gossiper.java (line 890) InetAddress /10.1.214.186 is now DOWN
INFO [HANDSHAKE-/10.1.214.186] 2015-04-27 17:01:21,400 OutboundTcpConnection.java (line 386) Handshaking version with /10.1.214.186
INFO [RequestResponseStage:11] 2015-04-27 17:01:23,439 Gossiper.java (line 876) InetAddress /10.1.214.186 is now UP
Then on the other node:
10.1.214.186 ERROR [STREAM-IN-/10.1.212.233] 2015-04-27 17:02:07,007 StreamSession.java (line 454) [Stream #d42dfef0-ecfe-11e4-8099-5be75b0950b8] Streaming error occurred
I also see things like this in the logs:
10.1.219.232 INFO [ScheduledTasks:1] 2015-04-27 18:20:19,987 GCInspector.java (line 116) GC for ParNew: 118272 ms for 2 collections, 980357368 used; max is 12801015808
10.1.221.146 INFO [ScheduledTasks:1] 2015-04-27 18:20:29,468 GCInspector.java (line 116) GC for ParNew: 154911 ms for 1 collections, 1287263224 used; max is 12801015808
It seems that it happens on different nodes each time we try to bootstrap a new node.
I've found this related ticket. https://issues.apache.org/jira/browse/CASSANDRA-6653
My only guess is that when the new node comes up, a lot of compactions fire off, and that might be causing the GC pause times. I had considered setting concurrent_compactors = 1/2 my total CPU count.
Anyone have an idea?
Edit: more details on the GC settings. Using i2.2xlarge nodes on EC2:
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="800M"
Also
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
With help from the DSE crew, the following settings helped us.
On an i2.2xlarge node (8 CPUs, 60 GB of RAM, local SSD only):
Increasing Heap New Size to 512M * num CPU (in our case 4G)
Setting memtable_flush_writers = 8
Setting concurrent_compactors = total CPU / 2 (in our case 4)
After making these changes, we no longer see ParNew GC times exceeding 1 second on bootstrap (previously we were seeing 50-100 SECOND GC times). FWIW, we don't see long ParNew GC times during normal operation, only during bootstrap.
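Roughly, the corresponding entries would look like this (a sketch for an 8-CPU i2.2xlarge, using the values above):
# cassandra-env.sh
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="4G"             # 512M * number of CPUs
# cassandra.yaml
memtable_flush_writers: 8
concurrent_compactors: 4      # total CPU / 2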

Cassandra CPU Load (too much)

Using top:
8260 root 20 0 5163m 4.7g **133m** S 144.6 30.5 2496:46 java
Most of the time %CPU is >170.
I am trying to identify the issue. I think GC or flushing is to blame.
S0 S1 E O P YGC YGCT FGC FGCT GCT LGCC GCC
0.00 16.73 74.74 29.33 59.91 27819 407.186 206 10.729 417.914 Allocation Failure No GC
0.00 16.73 99.57 29.33 59.91 27820 407.186 206 10.729 417.914 Allocation Failure Allocation Failure
Also, the Cassandra logs show ReplayPosition entries with the same segment ID, and the memtable is flushing very often:
INFO [SlabPoolCleaner] 2015-01-20 13:55:48,515 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 112838010 (11%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:48,516 Memtable.java:325 - Writing Memtable-bid_list#2003093066(23761503 serialized bytes, 211002 ops, 11%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:49,251 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3965-Data.db (4144688 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25289038)
INFO [SlabPoolCleaner] 2015-01-20 13:56:23,429 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 104056985 (10%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:23,429 Memtable.java:325 - Writing Memtable-bid_list#1124683519(21909522 serialized bytes, 194778 ops, 10%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:24,130 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3967-Data.db (3830733 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25350445)
INFO [SlabPoolCleaner] 2015-01-20 13:56:55,493 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 95807739 (9%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:55,494 Memtable.java:325 - Writing Memtable-bid_list#473510037(20170635 serialized bytes, 179514 ops, 9%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:56,151 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3968-Data.db (3531752 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25373052)
Any help or suggestion would be great. I have also set durable_writes to false for the keyspace. Thanks.
I just found out that after restarting all the nodes, YGC on one of the servers keeps kicking in even though nothing is happening; I have stopped the dumping of data, etc.
What type of compaction do you use? Size tiered or Leveled?
If you are using leveled compaction, can you switch over to size-tiered, since you seem to have too many compactions? Increasing the sstable size for leveled compaction may also help.
sstable_size_in_mb (default: 160 MB): the target size for SSTables that use the leveled compaction strategy. Although SSTable sizes should be less than or equal to sstable_size_in_mb, it is possible to have a larger SSTable during compaction. This occurs when data for a given partition key is exceptionally large; the data is not split into two SSTables.
(http://www.datastax.com/documentation/cassandra/1.2/cassandra/reference/referenceTableAttributes.html#reference_ds_zyq_zmz_1k__sstable_size_in_mb)
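With the map-based compaction syntax (CQL 3, Cassandra 2.x), raising the LCS sstable size would look roughly like this (the table name and the 320 MB value are illustrative):
ALTER TABLE bid_list
  WITH compaction = { 'class': 'LeveledCompactionStrategy',
                      'sstable_size_in_mb': 320 };  -- raise the per-SSTable target size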
If you are using size-tiered compaction, increase the number of SSTables required before a minor compaction kicks in. This is set when the table is created, but you can change it with an ALTER TABLE command. Example below:
ALTER TABLE users WITH
compaction_strategy_class='SizeTieredCompactionStrategy' AND
min_compaction_threshold = 6;
Compact after 6 SSTables are created

Cassandra repair failing because of GC

We have a 9 node cluster and are running repairs every night as recommended (1 node each night).
We recently started having problems during the repairs: some nodes would die with an OutOfMemoryError because the GC could not collect fast enough. In the beginning it was a promotion issue (as shown by the detailed GC logs).
So we assumed that CMS was not triggered early enough, preventing ParNew from promoting surviving objects. We then lowered -XX:CMSInitiatingOccupancyFraction from 75 to 50 to force the old-generation GC to trigger sooner.
It seemed to work, but yesterday two nodes died because the GC couldn't cope with the allocation rate, producing this kind of log:
INFO [ScheduledTasks:1] 2013-09-27 23:36:38,111 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 21756 ms for 1 collections, 8003258240 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:36:38,878 GCInspector.java (line 142) Heap is 0.9746211436302873 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:36:57,018 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 17265 ms for 1 collections, 6587223560 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:36:57,243 GCInspector.java (line 142) Heap is 0.802179208376459 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:37:18,180 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 18437 ms for 1 collections, 6961687392 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:37:18,785 GCInspector.java (line 142) Heap is 0.8477806818323523 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:37:40,416 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19032 ms for 1 collections, 7338693168 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:37:40,456 GCInspector.java (line 142) Heap is 0.893691708259552 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:02,994 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 18853 ms for 1 collections, 7570047632 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:03,008 GCInspector.java (line 142) Heap is 0.9218656026318086 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:26,110 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19564 ms for 1 collections, 7714594464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:26,132 GCInspector.java (line 142) Heap is 0.9394682332713986 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:49,733 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 20388 ms for 1 collections, 7843428464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:49,748 GCInspector.java (line 142) Heap is 0.9551573859456055 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:39:14,564 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 20956 ms for 1 collections, 7934286376 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:39:14,578 GCInspector.java (line 142) Heap is 0.9662218848591505 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:39:40,186 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 22440 ms for 1 collections, 8008275464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:39:40,915 GCInspector.java (line 142) Heap is 0.9752321313612954 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:40:01,836 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19911 ms for 1 collections, 8022614576 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:40:06,032 GCInspector.java (line 142) Heap is 0.976978320390438 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:40:27,407 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 22590 ms for 1 collections, 8058828880 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:40:31,091 GCInspector.java (line 142) Heap is 0.9813884275395302 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [GossipTasks:1] 2013-09-27 23:40:53,798 Gossiper.java (line 799) InetAddress /<datacenter02>.<node2> is now DOWN
INFO [GossipTasks:1] 2013-09-27 23:40:53,846 Gossiper.java (line 799) InetAddress /<datacenter01>.<node3> is now DOWN
INFO [GossipStage:1] 2013-09-27 23:40:53,857 Gossiper.java (line 785) InetAddress /<datacenter01>.<node3> is now UP
INFO [GossipStage:1] 2013-09-27 23:40:53,909 Gossiper.java (line 785) InetAddress /<datacenter02>.<node2> is now UP
This time the heap grows and the GC runs for 10-20 seconds without reducing the heap size, causing nodes to mark each other down because they are busy GCing. In the end the nodes died of OOM.
We then tried updating to the latest version of Cassandra (1.2.8 -> 1.2.10), even though no bug fixed in these versions suggested any improvement for our problem. We reran a repair last night; even though no nodes crashed, they failed to repair some ranges because of GCs of this kind:
INFO [ScheduledTasks:1] 2013-09-29 04:45:05,467 GCInspector.java (line 119) GC for ParNew: 22875 ms for 2 collections, 4128819328 used; max is 8211660800
INFO [ScheduledTasks:1] 2013-09-29 04:53:24,597 GCInspector.java (line 119) GC for ParNew: 133643 ms for 2 collections, 3102634584 used; max is 8211660800
This time it's ParNew taking ridiculous amounts of time.
I first thought of a load issue, but it continued to happen during the weekend when only the repair was running.
Any help would be appreciated to diagnose / fix our issue.
The StatusLogger info doesn't show anything unusual except for GC taking a while. (Are you running on VMs? That tends to reduce GC performance: http://www.slideshare.net/eonnen/high-performance-network-programming-on-the-jvm-oscon-2012/62.)
My guess: repair adds enough load to the system that it falls behind processing requests and spends too much memory buffering them. You can verify this by looking for "dropped" messages in the log. By default it will buffer 10s worth of requests; to reduce this, lower the appropriate rpc timeouts in cassandra.yaml.
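As a sketch, the relevant cassandra.yaml knobs would be the request timeouts (values illustrative; the defaults of 10000 ms correspond to the 10 s of buffering mentioned above):
# cassandra.yaml
read_request_timeout_in_ms: 5000
write_request_timeout_in_ms: 5000
request_timeout_in_ms: 5000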
Try using the G1 GC instead of the CMS. G1 does not pause like that:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
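A sketch of what that would look like in cassandra-env.sh, assuming the CMS/ParNew flags shown earlier are removed first (the pause target is illustrative):
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"   # target max pause; tune for your workload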
