Error while installing Cassandra on Windows

INFO 19:07:42,273 GC for ParNew: 2182 ms, 27013384 reclaimed leaving 215461536 used; max is 1171062784
INFO 19:07:44,382 Pool Name Active Pending
INFO 19:07:44,960 ReadStage 0 0
INFO 19:07:44,976 RequestResponseStage 0 0
INFO 19:07:44,976 ReadRepairStage 0 0
INFO 19:07:45,007 MutationStage 0 0
INFO 19:07:45,007 GossipStage 0 0
INFO 19:07:45,007 AntiEntropyStage 0 0
INFO 19:07:45,007 MigrationStage 0 0
INFO 19:07:45,007 StreamStage 0 0
INFO 19:07:45,022 MemtablePostFlusher 0 0
INFO 19:07:45,022 FlushWriter 0 0
INFO 19:07:45,022 MiscStage 0 0
INFO 19:07:45,022 FlushSorter 0 0
INFO 19:07:45,038 InternalResponseStage 0 0
INFO 19:07:45,038 HintedHandoff 0 0
INFO 19:07:45,085 CompactionManager n/a 0
INFO 19:07:45,101 MessagingService n/a 0,0
INFO 19:07:45,116 ColumnFamily                Memtable ops,data   Row cache size/cap   Key cache size/cap
INFO 19:07:45,288 system.LocationInfo         0,0                 0/0                  1/1
INFO 19:07:45,304 system.HintsColumnFamily    0,0                 0/0                  0/1
INFO 19:07:45,319 system.Migrations           0,0                 0/0                  0/1
INFO 19:07:45,319 system.Schema               0,0                 0/0                  0/1
INFO 19:07:45,319 system.IndexInfo            0,0                 0/0                  0/1
After this, the installation process does not proceed any further. It simply hangs, showing:
Listening for thrift clients....

That's exactly what you want to see when your Cassandra node/cluster is up and running.

Schildmeijer is correct - this is a normal log output from running Cassandra after a successful installation.
If you are unsure, then try running the Cassandra CLI (see http://wiki.apache.org/cassandra/CassandraCli) and execute some commands to check that the server node responds.
Cassandra runs as a server process - you don't interact directly with it, only via the CLI or another client tool.
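For example, a minimal check could look like this (a hedged sketch assuming the default Thrift port 9160 and the legacy cassandra-cli shipped with this era of Cassandra; adjust host and port for your setup):

bin/cassandra-cli -host localhost -port 9160
# at the cassandra-cli prompt, these should return without errors if the node is healthy:
show cluster name;
show keyspaces;

If these commands respond, the node is up and serving Thrift clients as intended.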

Related

Not marking nodes down due to local pause of 8478595263 > 5000000000

I have a 3-node Cassandra cluster in Kubernetes, deployed with the bitnami/cassandra Helm chart.
After some time, as the number of requests grows, I start getting the following warnings:
WARN [GossipTasks:1] 2020-01-09 11:39:33,070 FailureDetector.java:278 - Not marking nodes down due to local pause of 8206335128 > 5000000000
WARN [GossipTasks:1] 2020-01-09 11:39:42,238 FailureDetector.java:278 - Not marking nodes down due to local pause of 6668041401 > 5000000000
WARN [GossipTasks:1] 2020-01-09 11:40:03,341 FailureDetector.java:278 - Not marking nodes down due to local pause of 15041441083 > 5000000000
WARN [PERIODIC-COMMIT-LOG-SYNCER] 2020-01-09 11:41:55,606 NoSpamLogger.java:94 - Out of 1 commit log syncs over the past 0.00s with average duration of 11850.79ms, 1 have exceeded the configured commit interval by an average of 1850.79ms
WARN [GossipTasks:1] 2020-01-09 11:42:20,019 Gossiper.java:783 - Gossip stage has 1 pending tasks; skipping status check (no nodes will be marked down)
INFO [RequestResponseStage-1] 2020-01-09 11:45:36,329 Gossiper.java:1011 - InetAddress /100.96.7.7 is now UP
INFO [RequestResponseStage-1] 2020-01-09 11:45:36,330 Gossiper.java:1011 - InetAddress /100.96.7.7 is now UP
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,931 MessagingService.java:1236 - MUTATION messages were dropped in last 5000 ms: 0 internal and 45 cross node. Mean internal dropped latency: 0 ms and Mean cross-node dropped latency: 2874 ms
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,933 StatusLogger.java:47 - Pool Name Active Pending Completed Blocked All Time Blocked
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,949 StatusLogger.java:51 - MutationStage 0 0 226236 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,950 StatusLogger.java:51 - ViewMutationStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,950 StatusLogger.java:51 - ReadStage 0 0 244468 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,951 StatusLogger.java:51 - RequestResponseStage 0 0 341270 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,952 StatusLogger.java:51 - ReadRepairStage 0 0 5395 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,953 StatusLogger.java:51 - CounterMutationStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,958 StatusLogger.java:51 - MiscStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,959 StatusLogger.java:51 - CompactionExecutor 0 0 686641 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,960 StatusLogger.java:51 - MemtableReclaimMemory 0 0 689 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,962 StatusLogger.java:51 - PendingRangeCalculator 0 0 9 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,964 StatusLogger.java:51 - GossipStage 0 0 3093860 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,966 StatusLogger.java:51 - SecondaryIndexManagement 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,970 StatusLogger.java:51 - HintsDispatcher 0 0 10 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,973 StatusLogger.java:51 - MigrationStage 0 0 6 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,973 StatusLogger.java:51 - MemtablePostFlush 0 0 717 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,974 StatusLogger.java:51 - PerDiskMemtableFlushWriter_0 0 0 689 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,974 StatusLogger.java:51 - ValidationExecutor 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,975 StatusLogger.java:51 - Sampler 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,975 StatusLogger.java:51 - MemtableFlushWriter 0 0 689 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,976 StatusLogger.java:51 - InternalResponseStage 0 0 869 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,977 StatusLogger.java:51 - AntiEntropyStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2020-01-09 11:45:55,978 StatusLogger.java:51 - CacheCleanupExecutor 0 0 0 0 0
INFO [Service Thread] 2020-01-09 12:11:49,292 GCInspector.java:284 - ParNew GC in 659ms. CMS Old Gen: 2056877512 -> 2057740336; Par Eden Space: 671088640 -> 0; Par Survivor Space: 2636992 -> 6187520
I tried to resolve this based on some related issues, but none of them address Kubernetes specifically:
Cassandra Error message: Not marking nodes down due to local pause. Why?
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 245904 0 0
MiscStage 0 0 0 0 0
CompactionExecutor 0 0 696906 0 0
MutationStage 0 0 244820 0 0
MemtableReclaimMemory 0 0 697 0 0
PendingRangeCalculator 0 0 9 0 0
GossipStage 0 0 3138625 0 0
SecondaryIndexManagement 0 0 0 0 0
HintsDispatcher 0 0 10 0 0
RequestResponseStage 0 0 364305 0 0
Native-Transport-Requests 0 0 11089339 0 241
ReadRepairStage 0 0 5395 0 0
CounterMutationStage 0 0 0 0 0
MigrationStage 0 0 6 0 0
MemtablePostFlush 0 0 725 0 0
PerDiskMemtableFlushWriter_0 0 0 697 0 0
ValidationExecutor 0 0 0 0 0
Sampler 0 0 0 0 0
MemtableFlushWriter 0 0 697 0 0
InternalResponseStage 0 0 869 0 0
ViewMutationStage 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
CacheCleanupExecutor 0 0 0 0 0
Message type Dropped
READ 0
RANGE_SLICE 0
_TRACE 0
HINT 0
MUTATION 45
COUNTER_MUTATION 0
BATCH_STORE 0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE 0
READ_REPAIR 0
From the tpstats output above, the metrics mostly look okay, but the dropped mutations indicate that your cluster is getting overloaded, and some Native-Transport-Requests have been blocked as well. The commit log does not seem to be keeping up with the writes. You should plan a cluster expansion or start debugging why the nodes are overloading.
Based on the log entries you posted above, the nodes are overloaded, which makes them unresponsive. Mutations get dropped because the commit log disks cannot keep up with the writes.
You will need to review the size of your cluster as you might need to add more nodes to increase capacity. Cheers!
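As a starting point for that debugging, a hedged sketch of the kind of checks you could run (standard nodetool subcommands plus kubectl; the pod name and namespace below are placeholders for whatever the Bitnami chart created in your cluster):

# dropped messages and blocked pools on each Cassandra pod
kubectl exec -n <namespace> cassandra-0 -- nodetool tpstats

# per-table latencies and pending flushes
kubectl exec -n <namespace> cassandra-0 -- nodetool tablestats

# are the pods CPU or memory constrained? (requires metrics-server)
kubectl top pods -n <namespace>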

Cassandra node stuck in Joining state

I'm trying to add a new node to an existing Cassandra 3.11.1.0 cluster with the auto_bootstrap: true option. The new node completed streaming the data from the other nodes, as well as the secondary index build and the compaction of the main table, but after that it seems to be stuck in the JOINING state. There are no errors/warnings in the node's system.log - just INFO messages.
Also, during the secondary index build and compaction there was significant CPU load on the node, and now there is none. So it looks like the node got stuck during bootstrap and is currently idle.
# nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN XX.XX.XX.109 33.37 GiB 256 ? xxxx-9f1c79171069 rack1
UN XX.XX.XX.47 35.41 GiB 256 ? xxxx-42531b89d462 rack1
UJ XX.XX.XX.32 15.18 GiB 256 ? xxxx-f5838fa433e4 rack1
UN XX.XX.XX.98 20.65 GiB 256 ? xxxx-add6ed64bcc2 rack1
UN XX.XX.XX.21 33.02 GiB 256 ? xxxx-660149bc0070 rack1
UN XX.XX.XX.197 25.98 GiB 256 ? xxxx-703bd5a1f2d4 rack1
UN XX.XX.XX.151 21.9 GiB 256 ? xxxx-867cb3b8bfca rack1
nodetool compactionstats shows that some compactions are pending, but I have no idea whether there is any activity or it is just stuck:
# nodetool compactionstats
pending tasks: 4
- keyspace_name.table_name: 4
nodetool netstats shows that the Completed counters for Small/Gossip messages are increasing:
# nodetool netstats
Mode: JOINING
Bootstrap xxxx-81b554ae3baf
/XX.XX.XX.109
/XX.XX.XX.47
/XX.XX.XX.98
/XX.XX.XX.151
/XX.XX.XX.21
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name Active Pending Completed Dropped
Large messages n/a 0 0 0
Small messages n/a 0 571777 0
Gossip messages n/a 0 199190 0
nodetool tpstats shows that the Completed counters for the CompactionExecutor, MigrationStage and GossipStage pools are increasing:
# nodetool tpstats
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 0 0 0
MiscStage 0 0 0 0 0
CompactionExecutor 0 0 251 0 0
MutationStage 0 0 571599 0 0
MemtableReclaimMemory 0 0 98 0 0
PendingRangeCalculator 0 0 7 0 0
GossipStage 0 0 185695 0 0
SecondaryIndexManagement 0 0 2 0 0
HintsDispatcher 0 0 0 0 0
RequestResponseStage 0 0 6 0 0
ReadRepairStage 0 0 0 0 0
CounterMutationStage 0 0 0 0 0
MigrationStage 0 0 14 0 0
MemtablePostFlush 0 0 148 0 0
PerDiskMemtableFlushWriter_0 0 0 98 0 0
ValidationExecutor 0 0 0 0 0
Sampler 0 0 0 0 0
MemtableFlushWriter 0 0 98 0 0
InternalResponseStage 0 0 11 0 0
ViewMutationStage 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
CacheCleanupExecutor 0 0 0 0 0
Message type Dropped
READ 0
RANGE_SLICE 0
_TRACE 0
HINT 0
MUTATION 124
COUNTER_MUTATION 0
BATCH_STORE 0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE 0
READ_REPAIR 0
So it looks like the node is still receiving some data from the other nodes and applying it, but I don't know how to check the progress, or whether I should wait or cancel the bootstrap. I have already re-bootstrapped this node once and ended up in the same situation: the node stayed in the UJ state for a long time (16 hours) with some pending compactions and the CPU 99.9% idle. I also added nodes to the cluster about a month ago without any issues - those nodes joined within 2-3 hours and went to the UN state.
Also, nodetool cleanup is currently running on one of the existing nodes, and on that node I see the following warnings in system.log:
WARN [STREAM-IN-/XX.XX.XX.32:46814] NoSpamLogger.java:94 log Spinning trying to capture readers [BigTableReader(path='/var/lib/cassandra/data/keyspace_name/table_name-6750375affa011e7bdc709b3eb0d8941/mc-1117-big-Data.db'), BigTableReader(path='/var/lib/cassandra/data/keyspace_name/table_name-6750375affa011e7bdc709b3eb0d8941/mc-1070-big-Data.db'), ...]
Since cleanup is a local procedure, it should not affect the new node during bootstrap, but I could be wrong.
Any help will be appreciated.
Sometimes this can happen. Maybe there was an issue with gossip communicating that joining had completed, or maybe another node quickly reported as DN and disrupted the process.
When this happens, you have a couple of options:
You can always stop the node, wipe it, and try to join it again.
If you're sure that all (or most) of the data is there, you can stop the node, and add a line in the cassandra.yaml of auto_bootstrap: false. The node will start, join the cluster, and serve its data. For this option, it's usually a good idea to run a repair once the node is up.
Just set auto_bootstrap: false in the cassandra.yaml of the new node and then restart it; it will join as UN. After some time, run a full repair to ensure consistency.
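To illustrate the second option, a minimal sketch of the steps (the config path and service name assume a typical package install; adjust for your environment, and only do this if you are confident the streamed data is complete):

# stop the stuck joining node (assumes a systemd service named "cassandra")
sudo systemctl stop cassandra

# add auto_bootstrap: false to cassandra.yaml; appending is shown for brevity,
# edit the file properly if the property is already present
echo 'auto_bootstrap: false' | sudo tee -a /etc/cassandra/cassandra.yaml

sudo systemctl start cassandra
nodetool status        # the node should now show as UN
nodetool repair -full  # then run a full repair to restore consistency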

Failed to add a new node to cassandra cluster

I have a cluster with four nodes, each holding about 70 GB of data.
When I add a new node to the cluster, it always
warns me about a tombstone problem like this:
WARN 09:38:03 Read 2578 live and 1114 tombstoned cells in xxxtable (see tombstone_warn_threshold).
10000 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
localDeletion=2147483647, ranges=[FAE69193423616A400258D99B9C0CCCFEC4A9547C1A1FC17BF569D2405705B8E:_-FAE69193423616A400258D99B9C0CCCFEC4A9547C1A1FC17BF569D2405705B8E:!,
deletedAt=1456243983944000, localDeletion=1456243983][FAE69193423616A40EC252766DDF513FBCA55ECDFAF452052E6C95D4BD641201:_-FAE69193423616A40EC252766DDF513FBCA55ECDFAF452052E6C95D4BD641201:!,
deletedAt=1460026357100000, localDeletion=1460026357][FAE69193423616A41BED8E613CD24BF3583FB6C6ABBA13F19C3E2D1824D01EF6:_-FAE69193423616A41BED8E613CD24BF3583FB6C6ABBA13F19C3E2D1824D01EF6:!, deletedAt=1458176745950000, localDeletion=1458176745][FAE69193423616A41BED8E613CD24BF3B06C1306E35B0ACA719D800D254E5930:_-FAE69193423616A41BED8E613CD24BF3B06C1306E35B0ACA719D800D254E5930:!, deletedAt=1458176745556000, localDeletion=1458176745][FAE69193423616A41BED8E613CD24BF3BA2AE7FC8340F96CC440BDDFFBCBE7D0:_-FAE69193423616A41BED8E613CD24BF3BA2AE7FC8340F96CC440BDDFFBCBE7D0:!,
deletedAt=1458176745740000, localDeletion=1458176745][FAE69193423616A41BED8E613CD24BF3E5A681C7ECC09A93429CEE59A76DA131:_-FAE69193423616A41BED8E613CD24BF3E5A681C7ECC09A93429CEE59A76DA131:!,
deletedAt=1458792793219000, localDeletion=
and finally it takes a long time to start and throws
java.lang.OutOfMemoryError: Java heap space
Following is the error log:
INFO 20:39:20 ConcurrentMarkSweep GC in 5859ms. CMS Old Gen: 6491794984 -> 6492437040; Par Eden Space: 1398145024 -> 1397906216; Par Survivor Space: 349072992 -> 336156096
INFO 20:39:20 Enqueuing flush of refresh_token: 693 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 Pool Name Active Pending Completed Blocked All Time Blocked
INFO 20:39:20 Enqueuing flush of log_user_track: 7047 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 CounterMutationStage 0 0 0 0 0
INFO 20:39:20 Enqueuing flush of userinbox: 42819 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 Enqueuing flush of messages: 7954 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 ReadStage 0 0 0 0 0
INFO 20:39:20 RequestResponseStage 0 0 6 0 0
INFO 20:39:20 Enqueuing flush of sstable_activity: 6567 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 ReadRepairStage 0 0 0 0 0
INFO 20:39:20 Enqueuing flush of convmsgs: 2132 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 MutationStage 0 0 72300 0 0
INFO 20:39:20 Enqueuing flush of sstable_activity: 1791 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 GossipStage 0 0 23655 0 0
INFO 20:39:20 Enqueuing flush of log_user_track: 1165 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 AntiEntropyStage 0 0 0 0 0
INFO 20:39:20 Enqueuing flush of sstable_activity: 2388 (0%) on-heap, 0 (0%) off-heap
INFO 20:39:20 CacheCleanupExecutor 0 0 0 0 0
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid17155.hprof ...
When I run nodetool tpstats, I see that the MemtableFlushWriter and MemtablePostFlush pools have a large number of pending tasks.
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 0 0 0 0 0
RequestResponseStage 0 0 8 0 0
MutationStage 0 0 1382245 0 0
ReadRepairStage 0 0 0 0 0
GossipStage 0 0 23553 0 0
CacheCleanupExecutor 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
MigrationStage 0 0 0 0 0
ValidationExecutor 0 0 0 0 0
CommitLogArchiver 0 0 0 0 0
MiscStage 0 0 0 0 0
MemtableFlushWriter 4 7459 220 0 0
MemtableReclaimMemory 0 0 231 0 0
PendingRangeCalculator 0 0 3 0 0
MemtablePostFlush 1 7464 331 0 0
CompactionExecutor 3 3 269 0 0
InternalResponseStage 0 0 0 0 0
HintedHandoff 0 0 4 0 0
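Incidentally, a simple way to keep an eye on just those flush-related pools while the node starts up (a hypothetical one-liner using only standard tools; it is not from the original post):

watch -n 10 "nodetool tpstats | grep -E 'MemtableFlushWriter|MemtablePostFlush|CompactionExecutor'"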

Cassandra 2.1 writes slow in a 1 TB data table

I am doing some tests in a Cassandra cluster, and I now have a table with 1 TB of data per node. When I used YCSB to run more insert operations, I found the throughput was really low (about 10,000 ops/sec) compared to an otherwise identical, new table in the same cluster (about 80,000 ops/sec). While inserting, CPU usage was about 40% and there was almost no disk usage.
I used nodetool tpstats to get task details, and it showed:
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 0 0 102 0 0
RequestResponseStage 0 0 41571733 0 0
MutationStage 384 21949 82375487 0 0
ReadRepairStage 0 0 0 0 0
GossipStage 0 0 247100 0 0
CacheCleanupExecutor 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
MigrationStage 0 0 6 0 0
Sampler 0 0 0 0 0
ValidationExecutor 0 0 0 0 0
CommitLogArchiver 0 0 0 0 0
MiscStage 0 0 0 0 0
MemtableFlushWriter 16 16 4745 0 0
MemtableReclaimMemory 0 0 4745 0 0
PendingRangeCalculator 0 0 4 0 0
MemtablePostFlush 1 163 9394 0 0
CompactionExecutor 8 29 13713 0 0
InternalResponseStage 0 0 0 0 0
HintedHandoff 2 2 5 0 0
I found there was a large number of pending MutationStage and MemtablePostFlush tasks.
I have read some related articles about Cassandra write limitations but found no useful information. I want to know why there is such a huge difference in Cassandra throughput between two identical tables that differ only in data size.
In addition, I use SSDs on my servers; however, this phenomenon also occurs in another cluster using HDDs.
While Cassandra was running, I found that both %user and %nice CPU utilization were about 10% while only compaction tasks were running, with a compaction throughput of about 80 MB/s, even though I had set the nice value of my Cassandra process to 0.
Wild guess: your system is busy compacting the sstables.
Check it with nodetool compactionstats.
By the way, YCSB does not use prepared statements, which makes it a bad estimator of actual application load.
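To make that concrete, a hedged sketch of the checks implied here (both are standard nodetool subcommands; run them on the node while the test is in progress):

# list active and pending compactions
nodetool compactionstats

# show the configured compaction throughput cap in MB/s (defaults to 16 in cassandra.yaml)
nodetool getcompactionthroughput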

Upgrade from cassandra 2.1.4 to 2.1.5

Everyone
A few days ago I upgraded our 6-node EC2 cluster from Cassandra 2.1.4 to 2.1.5.
Since then, all my nodes have "exploded" in their CPU usage - they're at 100% CPU much of the time, and their load average is between 100 and 300 (!!!).
This did not start immediately after the upgrade. It started a few hours afterwards with one of the nodes, and slowly, more and more nodes began to exhibit the same behavior.
It seems to correlate with the compaction of our largest column family, and after the compaction is complete (~24 hours after it starts) it seems the node goes back to normal. It has only been 2 days or so, so I'm hoping it will not happen again, but I am still monitoring this.
Here are my questions:
Is this a bug or an expected behavior?
If it is an expected behavior -
What is the explanation for this issue?
Is it documented somewhere that I have missed?
Should I do upgrades differently? Maybe 1 or 2 nodes at a time every 24 hours or so? What is the best practice?
If it is a bug -
Is it known?
Where should I report this? What data should I add?
Will downgrading back to 2.1.4 work?
Any feedback on this would be great
Thanks
Amir
Update:
This is the structure of the table in question.
CREATE TABLE tbl1 (
key text PRIMARY KEY,
created_at timestamp,
customer_id bigint,
device_id bigint,
event text,
fail_count bigint,
generation bigint,
gr_id text,
imei text,
raw_post text,
"timestamp" timestamp
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'NONE';
The logs do not reveal much (at least to me). Here's a snippet of how they look:
INFO [WRITE-/10.0.1.142] 2015-05-23 05:43:42,577 YamlConfigurationLoader.java:92 - Loading settings from file:/etc/cassandra/cassandra.yaml
INFO [WRITE-/10.0.1.142] 2015-05-23 05:43:42,580 YamlConfigurationLoader.java:135 - Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_rpc_address=10.0.2.145; cas_contention_timeout_in_ms=1000; client_encryption_options=; cluster_name=Gryphonet21 Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/data/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/data/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=GossipingPropertyFileSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=16; partitioner=RandomPartitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/data/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=10.0.1.141,10.0.2.145,10.0.3.149}]}]; server_encryption_options=; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO [HANDSHAKE-/10.0.1.142] 2015-05-23 05:43:42,591 OutboundTcpConnection.java:494 - Cannot handshake version with /10.0.1.142
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,713 MessagingService.java:887 - 135 MUTATION messages dropped in last 5000ms
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,713 StatusLogger.java:51 - Pool Name Active Pending Completed Blocked All Time Blocked
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,714 StatusLogger.java:66 - CounterMutationStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,714 StatusLogger.java:66 - ReadStage 5 1 5702809 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - RequestResponseStage 0 45 29528010 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - ReadRepairStage 0 0 997 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - MutationStage 0 31 43404309 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - GossipStage 0 0 569931 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - AntiEntropyStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - CacheCleanupExecutor 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,717 StatusLogger.java:66 - MigrationStage 0 0 9 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,829 StatusLogger.java:66 - ValidationExecutor 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,830 StatusLogger.java:66 - Sampler 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,830 StatusLogger.java:66 - MiscStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - CommitLogArchiver 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - MemtableFlushWriter 1 1 1756 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - PendingRangeCalculator 0 0 11 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - MemtableReclaimMemory 0 0 1756 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - MemtablePostFlush 1 2 3819 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - CompactionExecutor 2 32 742 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,833 StatusLogger.java:66 - InternalResponseStage 0 0 0 0 0
INFO [HANDSHAKE-/10.0.1.142] 2015-05-23 05:43:45,086 OutboundTcpConnection.java:485 - Handshaking version with /10.0.1.142
UPDATE:
The issue persists. I thought after one compaction on each node finishes the node is back to normal, but it isn't. After a few hours, CPU jumps to 100% and load average in the areas of 100-300.
I am downgrading back to 2.1.4.
UPDATE:
Used phact's dumpThreads script to get stack traces. Also, tried using jvmtop, but it just seemed to hang.
The output is too big to paste here, but you can find it at http://downloads.gryphonet.com/cassandra/.
Username: cassandra
Password: cassandra
Try using jvmtop to see what the Cassandra process is doing. It has two modes: one shows the currently running threads, and the other (--profile) shows the distribution of CPU time per class/method. Paste both outputs here.
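For reference, a sketch of the two jvmtop invocations this refers to (jvmtop.sh and its --profile flag are from the jvmtop project; finding the PID via pgrep is just one assumed way to do it):

PID=$(pgrep -f CassandraDaemon)   # the Cassandra JVM process

./jvmtop.sh $PID                  # mode 1: live view of the busiest threads
./jvmtop.sh --profile $PID        # mode 2: CPU time per method (sampling profiler)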
Answering my own question -
We are using a very specific thrift API - describe_splits_ex, and that seems to cause the issue.
It is obvious when looking at all the stack traces of all the different threads while the cpu usage goes to 100%.
For us, it was easy to fix, because we use this api as an optimization, not a must, so we just stopped using it, and the issue went away.
However, this API is also used by the cassandra-hadoop connector (at least it was in earlier versions), so I would test before upgrading to 2.1.5 if you are using the connector.
Not sure what change in 2.1.5 caused the issue, but I know it did not happen in 2.1.4 and happened consistently in 2.1.5.
