I have a Cassandra cluster with two nodes using SimpleStrategy replication.
Everything worked well until one of the nodes crashed. I recovered the crashed node by cloning the virtual machine of the remaining node (so we cloned the file system) and updating the listen and RPC addresses.
Now I keep getting the following strange error.
When I run each node on its own, everything works well. But when I start the second node, the first one fails with this error:
ERROR [Native-Transport-Requests-1] 2020-07-21 08:19:31,042 Message.java:693 - Unexpected exception during request; channel = [id: 0xc1935e7a, L:/192.168.40.15:9042 - R:/192.168.40.15:47980]
java.lang.AssertionError: null
at org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:1065) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:1079) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:107) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:3866) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:3852) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1914) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy$RangeIterator.computeNext(StorageProxy.java:1992) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy$RangeIterator.computeNext(StorageProxy.java:1962) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.4.jar:3.11.4]
at com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1149) ~[guava-18.0.jar:na]
at org.apache.cassandra.service.StorageProxy$RangeMerger.computeNext(StorageProxy.java:2014) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy$RangeMerger.computeNext(StorageProxy.java:1999) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy$RangeCommandIterator.computeNext(StorageProxy.java:2132) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.service.StorageProxy$RangeCommandIterator.computeNext(StorageProxy.java:2092) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:92) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:786) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:438) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:416) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:289) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:117) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:225) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:256) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:241) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116) ~[apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:566) [apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) [apache-cassandra-3.11.4.jar:3.11.4]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.44.Final.jar:4.0.44.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.0.44.Final.jar:4.0.44.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) [netty-all-4.0.44.Final.jar:4.0.44.Final]
at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348) [netty-all-4.0.44.Final.jar:4.0.44.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_252]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) [apache-cassandra-3.11.4.jar:3.11.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:114) [apache-cassandra-3.11.4.jar:3.11.4]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_252]
I'm using the following Cassandra version:
[cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4]
Here are the configuration files:
cassandra.yaml
cluster_name: 'babelfish'
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.30.15, 192.168.40.15"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 192.168.40.15
start_native_transport: true
native_transport_port: 9042
start_rpc: false
rpc_address: 192.168.40.15
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: GossipingPropertyFileSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
enable_materialized_views: true
windows_timer_interval: 1
transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    key_provider:
        - class_name: org.apache.cassandra.security.JKSKeyProvider
          parameters:
              - keystore: conf/.keystore
                keystore_password: cassandra
                store_type: JCEKS
                key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
    - class_name: org.apache.cassandra.net.RateBasedBackPressure
      parameters:
          - high_ratio: 0.90
            factor: 5
            flow: FAST
cassandra-rackdc.properties
# These properties are used with GossipingPropertyFileSnitch and will
# indicate the rack and dc for this node
dc=DC1
rack=RACK1
# Add a suffix to a datacenter name. Used by the Ec2Snitch and Ec2MultiRegionSnitch
# to append a string to the EC2 region name.
#dc_suffix=
# Uncomment the following line to make this snitch prefer the internal ip when possible, as the Ec2MultiRegionSnitch does.
# prefer_local=true
cassandra-topology.properties
# Cassandra Node IP=Data Center:Rack
192.168.30.15=DC1:RACK1
192.168.40.15=DC1:RACK1
# default for unknown nodes
default=DC1:r1
# Native IPv6 is supported, however you must escape the colon in the IPv6 Address
# Also be sure to comment out JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
# in cassandra-env.sh
# fe80\:0\:0\:0\:202\:b3ff\:fe1e\:8329=DC1:RAC3
What could be the origin of this error, and how could it be fixed?
If you cloned the virtual machine with all of its data, then the new node has all the data of the first node, including the node's ID. To solve this problem: shut down the second node, delete everything in its data_file_directories and commit log directory, leave only the first node in the seed list, and then start the second node so that it joins the cluster normally. After this process finishes, update the seed list again. (If you leave the second node in the seed list, it won't join the cluster but will instead bootstrap a new cluster.)
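A minimal sketch of that procedure on the second node, assuming the directories from the cassandra.yaml above and a systemd-managed service (adjust the paths and service management to your environment):

# On the second node: stop Cassandra and wipe the state cloned from node 1
sudo systemctl stop cassandra
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*

# In cassandra.yaml, temporarily list only the first node as a seed:
#     - seeds: "192.168.30.15"

# Start the node so it bootstraps into the existing cluster
sudo systemctl start cassandra

# Wait until the node shows as UN with its own host ID, then restore the seed list
nodetool status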
Related
Hello guys, I need your help: I am getting a "connection refused" error when connecting to Cassandra.
Here is my cassandra.yaml file:
cluster_name: 'Test Cluster'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.84.36.89"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.84.36.89
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 10.84.36.89
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
    enabled: false
    chunk_length_kb: 64
    cipher: AES/CBC/PKCS5Padding
    key_alias: testing:1
    key_provider:
        - class_name: org.apache.cassandra.security.JKSKeyProvider
          parameters:
              - keystore: conf/.keystore
                keystore_password: cassandra
                store_type: JCEKS
                key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
replica_filtering_protection:
    cached_rows_warn_threshold: 2000
    cached_rows_fail_threshold: 32000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
    - class_name: org.apache.cassandra.net.RateBasedBackPressure
      parameters:
          - high_ratio: 0.90
            factor: 5
            flow: FAST
enable_materialized_views: true
enable_sasi_indexes: true
Previously I was using localhost instead of 10.84.36.89, and it was not working then either.
I also did not see any log files inside /var/log/cassandra.
I am using cqlsh 10.84.36.89 and cqlsh 10.84.36.89 9042 to connect to Cassandra, but both give me the same error.
Please help me fix this. I am using Ubuntu 18 and Cassandra version 3.11; I can't upgrade Cassandra because the newer version does not support the PHP driver.
nodetool status:
ERROR 16:21:09,536 Cannot initialize un-mmaper. (Are you using a non-Oracle JVM?) Compacted data files will not be removed promptly. Consider using an Oracle JVM or using standard disk access mode
java.lang.NoSuchMethodError: 'sun.misc.Cleaner sun.nio.ch.DirectBuffer.cleaner()'
at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:75) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:880) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:216) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.execute(NodeTool.java:184) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:56) ~[apache-cassandra-3.11.10.jar:3.11.10]
error: null
-- StackTrace --
java.lang.NullPointerException
at org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1995)
at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:102)
at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:60)
at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:81)
at org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:880)
at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:216)
at org.apache.cassandra.tools.NodeTool.execute(NodeTool.java:184)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:56)
nodetool info
ERROR 16:22:02,418 Cannot initialize un-mmaper. (Are you using a non-Oracle JVM?) Compacted data files will not be removed promptly. Consider using an Oracle JVM or using standard disk access mode
java.lang.NoSuchMethodError: 'sun.misc.Cleaner sun.nio.ch.DirectBuffer.cleaner()'
at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:75) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:880) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:216) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.execute(NodeTool.java:184) ~[apache-cassandra-3.11.10.jar:3.11.10]
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:56) ~[apache-cassandra-3.11.10.jar:3.11.10]
error: null
-- StackTrace --
java.lang.NullPointerException
at org.apache.cassandra.config.DatabaseDescriptor.getDiskFailurePolicy(DatabaseDescriptor.java:1995)
at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:102)
at org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:60)
at org.apache.cassandra.io.util.FileUtils.<clinit>(FileUtils.java:81)
at org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:880)
at org.apache.cassandra.tools.NodeTool.printHistory(NodeTool.java:216)
at org.apache.cassandra.tools.NodeTool.execute(NodeTool.java:184)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:56)
cassandra service status
● cassandra.service - LSB: distributed storage system for structured data
Loaded: loaded (/etc/init.d/cassandra; generated)
Active: active (exited) since Thu 2021-03-11 16:02:20 +0630; 20min ago
Docs: man:systemd-sysv-generator(8)
Process: 3604 ExecStop=/etc/init.d/cassandra stop (code=exited, status=0/SUCCESS)
Process: 3611 ExecStart=/etc/init.d/cassandra start (code=exited, status=0/SUCCESS)
Mar 11 16:02:20 smslb1 systemd[1]: Starting LSB: distributed storage system for structured data...
Mar 11 16:02:20 smslb1 systemd[1]: Started LSB: distributed storage system for structured data.
Based on the errors and symptoms you described, it doesn't look like Cassandra is running.
It's likely that you're not using a supported Java version. You will need to switch to Java 8 with at least update 40 (but newer releases are recommended).
Have a look at the pre-requisites I documented in Installing Cassandra for details. Cheers!
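As a quick check, here is a sketch for Ubuntu 18.04 (the openjdk-8-jdk package name and the systemd service name are assumptions for your system) to confirm the active Java and switch to Java 8:

# The NoSuchMethodError for sun.nio.ch.DirectBuffer.cleaner() above is
# typical of running Cassandra 3.11 on Java 9 or newer
java -version

# Install OpenJDK 8 and select it as the default
sudo apt install openjdk-8-jdk
sudo update-alternatives --config java

# Restart Cassandra and verify it actually came up
sudo systemctl restart cassandra
nodetool status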
In our test environment, we have a single-node Cassandra cluster with RF=1 for all keyspaces.
The JVM arguments of interest are listed below:
-XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn1G -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=1000003 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
We noticed frequent Full GC pauses, with Cassandra becoming unresponsive during GC.
INFO [Service Thread] 2016-12-29 15:52:40,901 GCInspector.java:252 - ParNew GC in 238ms. CMS Old Gen: 782576192 -> 802826248; Par Survivor Space: 60068168 -> 32163264
INFO [Service Thread] 2016-12-29 15:52:40,902 GCInspector.java:252 - ConcurrentMarkSweep GC in 1448ms. CMS Old Gen: 802826248 -> 393377248; Par Eden Space: 859045888 -> 0; Par Survivor Space: 32163264 -> 0
We are getting java.lang.OutOfMemoryError with the exception below:
ERROR [SharedPool-Worker-5] 2017-01-26 09:23:13,694 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.7.0_80]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:331) ~[na:1.7.0_80]
at org.apache.cassandra.utils.memory.SlabAllocator.getRegion(SlabAllocator.java:137) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:97) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Memtable.put(Memtable.java:192) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1237) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:400) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:363) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:1033) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2224) ~[apache-cassandra-2.1.8.jar:2.1.8]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_80]
at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.8.jar:2.1.8]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
We were able to restore Cassandra after executing nodetool repair.
nodetool status
Datacenter: DC1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.3.211.3 5.74 GB 256 ? 32251391-5eee-4891-996d-30fb225116a1 RAC1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
nodetool info
ID : 32251391-5eee-4891-996d-30fb225116a1
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 5.74 GB
Generation No : 1485526088
Uptime (seconds) : 330651
Heap Memory (MB) : 812.72 / 1945.63
Off Heap Memory (MB) : 7.63
Data Center : DC1
Rack : RAC1
Exceptions : 0
Key Cache : entries 68, size 6.61 KB, capacity 97 MB, 1158 hits, 1276 requests, 0.908 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 48 MB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Token : (invoke with -T/--tokens to see all 256 tokens)
In system.log, I see lots of "Compacting large partition" warnings:
WARN [CompactionExecutor:33463] 2016-12-24 05:42:29,550 SSTableWriter.java:240 - Compacting large partition mydb/Table_Name:2016-12-23 00:00+0530 (142735455 bytes)
WARN [CompactionExecutor:33465] 2016-12-24 05:47:57,343 SSTableWriter.java:240 - Compacting large partition mydb/Table_Name_2:22:0c2e6c00-a5a3-11e6-a05e-1f69f32db21c (162203393 bytes)
Regarding tombstones, I notice the following in system.log:
[main] 2016-12-28 18:23:06,534 YamlConfigurationLoader.java:135 - Node
configuration:[authenticator=PasswordAuthenticator;
authorizer=CassandraAuthorizer; auto_snapshot=true;
batch_size_warn_threshold_in_kb=5;
batchlog_replay_throttle_in_kb=1024;
cas_contention_timeout_in_ms=1000;
client_encryption_options=; cluster_name=bankbazaar;
column_index_size_in_kb=64; commit_failure_policy=ignore;
commitlog_directory=/var/cassandra/log/commitlog;
commitlog_segment_size_in_mb=32; commitlog_sync=periodic;
commitlog_sync_period_in_ms=10000;
compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32;
concurrent_reads=32; concurrent_writes=32;
counter_cache_save_period=7200; counter_cache_size_in_mb=null;
counter_write_request_timeout_in_ms=15000; cross_node_timeout=false;
data_file_directories=[/cryptfs/sdb/cassandra/data,
/cryptfs/sdc/cassandra/data, /cryptfs/sdd/cassandra/data];
disk_failure_policy=best_effort; dynamic_snitch_badness_threshold=0.1;
dynamic_snitch_reset_interval_in_ms=600000;
dynamic_snitch_update_interval_in_ms=100;
endpoint_snitch=GossipingPropertyFileSnitch;
hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024;
incremental_backups=false; index_summary_capacity_in_mb=null;
index_summary_resize_interval_in_minutes=60;
inter_dc_tcp_nodelay=false; internode_compression=all;
key_cache_save_period=14400; key_cache_size_in_mb=null;
listen_address=127.0.0.1; max_hint_window_in_ms=10800000;
max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers;
native_transport_port=9042; num_tokens=256;
partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
permissions_validity_in_ms=2000; range_request_timeout_in_ms=20000;
read_request_timeout_in_ms=10000;
request_scheduler=org.apache.cassandra.scheduler.NoScheduler;
request_timeout_in_ms=20000; row_cache_save_period=0;
row_cache_size_in_mb=0; rpc_address=127.0.0.1; rpc_keepalive=true;
rpc_port=9160; rpc_server_type=sync;
saved_caches_directory=/var/cassandra/data/saved_caches;
seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider,
parameters=[{seeds=127.0.0.1}]}];
server_encryption_options=;
snapshot_before_compaction=false; ssl_storage_port=9001;
sstable_preemptive_open_interval_in_mb=50;
start_native_transport=true; start_rpc=true; storage_port=9000;
thrift_framed_transport_size_in_mb=15;
tombstone_failure_threshold=100000; tombstone_warn_threshold=1000;
trickle_fsync=false; trickle_fsync_interval_in_kb=10240;
truncate_request_timeout_in_ms=60000;
write_request_timeout_in_ms=5000]
nodetool tpstats
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 32 4061 50469243 0 0
RequestResponseStage 0 0 0 0 0
MutationStage 32 22 27665114 0 0
ReadRepairStage 0 0 0 0 0
GossipStage 0 0 0 0 0
CacheCleanupExecutor 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
MigrationStage 0 0 0 0 0
Sampler 0 0 0 0 0
ValidationExecutor 0 0 0 0 0
CommitLogArchiver 0 0 0 0 0
MiscStage 0 0 0 0 0
MemtableFlushWriter 0 0 7769 0 0
MemtableReclaimMemory 1 57 13433 0 0
PendingRangeCalculator 0 0 1 0 0
MemtablePostFlush 0 0 9279 0 0
CompactionExecutor 3 47 169022 0 0
InternalResponseStage 0 0 0 0 0
HintedHandoff 0 1 148 0 0
Is there any YAML or other configuration that can be used to avoid these large compactions?
What is the correct compaction strategy to use? Can an OutOfMemoryError be caused by the wrong compaction strategy?
In one of the keyspaces, each row is written once and read multiple times.
Another keyspace holds time-series-like data that is insert-only and read multiple times.
Seeing this: Heap Memory (MB): 812.72 / 1945.63 tells me that your one machine is probably underpowered. There's a good chance that you're not able to keep up with GC.
While I think this case is probably related to being undersized, access patterns, data model, and payload size can also affect GC, so if you'd like to update your post with that info, I can update my answer to reflect it.
EDIT to reflect new information
Thanks for adding additional information. Based on what you posted, there are two immediate things I notice that can cause your heap to blow:
Large Partition:
It looks like compaction had to compact two partitions that exceeded 100 MB (140 MB and 160 MB, respectively). Normally that would still be OK (not great), but because you're running on underpowered hardware with such a small heap, that's quite a lot.
The thing about compaction:
It uses a healthy mix of resources when it runs. It's business as usual, so it's something you should test and plan for. In this case, I'm certain that compaction is working harder because of the large partitions, consuming CPU (which GC also needs), heap, and IO.
This brings me to another concern:
Pool Name Active Pending Completed Blocked All time blocked
CounterMutationStage 0 0 0 0 0
ReadStage 32 4061 50469243 0 0
This is usually a sign that you need to scale up and/or scale out. In your case, you might want to do both. You can exhaust a single, underpowered node pretty quickly with an unoptimized data model. You also don't get to experience the nuances of a distributed system when you test in a single-node environment.
So the TL;DR:
For a read-heavy workload (which this seems to be), you'll need a larger heap. For overall sanity and cluster health, you'll need to revisit your data model to make sure the partitioning logic is sound. If you're not sure how or why to do either, I suggest spending some time here: https://academy.datastax.com/courses
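As a starting point for that, here is a sketch using the Cassandra 2.1 tools to locate the oversized partitions (mydb and Table_Name are taken from the log lines above; substitute your own keyspace and table names):

# Per-table statistics, including "Compacted partition maximum bytes"
nodetool cfstats mydb

# Partition size and cell count distribution for a suspect table
nodetool cfhistograms mydb Table_Name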
I'm experiencing node crashes, and the system.log file shows a bunch of ReadTimeoutExceptions hitting 500 ms.
The cassandra.yaml file has read_request_timeout_in_ms: 10000.
Can you folks please share how I can address these timeouts? Thanks in advance!
Error stack:
ERROR [SharedPool-Worker-241] 2017-02-01 13:18:27,663 Message.java:611 - Unexpected exception during request; channel = [id: 0x5d8abf33, /172.18.30.62:47580 => /216.12.225.9:9042]
java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:497) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:306) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.ClientState.login(ClientState.java:269) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [apache-cassandra-2.2.8.jar:2.2.8]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.2.8.jar:2.2.8]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:147) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1441) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1365) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1282) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:176) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:505) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) ~[apache-cassandra-2.2.8.jar:2.2.8]
... 13 common frames omitted
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,682 MessagingService.java:946 - READ messages were dropped in last 5000 ms: 149 for internal timeout and 0 for cross node timeout
INFO [Service Thread] 2017-02-01 13:18:27,693 StatusLogger.java:106 - enterprise.t_sf_venue_test 0,0
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,699 MessagingService.java:946 - REQUEST_RESPONSE messages were dropped in last 5000 ms: 7 for internal timeout and 0 for cross node timeout
INFO [Service Thread] 2017-02-01 13:18:27,699 StatusLogger.java:106 - enterprise.alestnstats 0,0
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,699 MessagingService.java:946 - RANGE_SLICE messages were dropped in last 5000 ms: 116 for internal timeout and 0 for cross node timeout
As you can see in your logs, the failing query is actually not the one you are trying to execute.
The failing query is internal to Cassandra:
"SELECT * FROM system_auth.roles;"
These internal Cassandra queries (misc queries) do not use read_request_timeout_in_ms; instead, they use request_timeout_in_ms.
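So the knob to raise is request_timeout_in_ms in cassandra.yaml, not read_request_timeout_in_ms. A sketch (20000 ms is an illustrative value, not a recommendation; a restart is required for the change to take effect):

# cassandra.yaml -- governs miscellaneous internal requests such as the
# system_auth.roles lookup shown in the stack trace above
request_timeout_in_ms: 20000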
I added a new node to my Cassandra cluster (the new node is not a seed node). I now have 3 nodes in my cluster:
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID
Rack
UN XXX.XXX.XXX.XXX 52.25 GB 256 100.0% XXX rack1
UN XXX.XXX.XXX.XXX 63.65 GB 256 100.0% XXX rack1
UN XXX.XXX.XXX.XXX 314.72 MB 256 100.0% XXX rack1
I have a replication factor of 3:
DESCRIBE KEYSPACE mykeyspace
CREATE KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'} AND durable_writes = true;
but the data is not replicated to the new node (the one with 314 MB of data).
I tried to use nodetool rebuild:
ERROR [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,765 StreamSession.java:520 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Streaming error occurred
java.lang.IllegalArgumentException: Unknown type 0
at org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) ~[apache-cassandra-3.1.1.jar:3.1.1]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
ERROR [Thread-16] 2016-11-11 08:28:42,765 CassandraDaemon.java:195 - Exception in thread Thread[Thread-16,5,RMI Runtime]
java.lang.RuntimeException: java.lang.InterruptedException
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-18.0.jar:na]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) ~[apache-cassandra-3.1.1.jar:3.1.1]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
Caused by: java.lang.InterruptedException: null
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) ~[na:1.8.0_74]
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048) ~[na:1.8.0_74]
at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353) ~[na:1.8.0_74]
at org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:184) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.1.1.jar:3.1.1]
... 1 common frames omitted
INFO [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,805 StreamResultFuture.java:182 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Session with /XXX.XXX.XXX.XXX is complete
WARN [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,807 StreamResultFuture.java:209 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Stream failed
ERROR [RMI TCP Connection(14)-127.0.0.1] 2016-11-11 08:28:42,808 StorageService.java:1128 - Error while rebuilding node
org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85) ~[apache-cassandra-3.1.1.jar:3.1.1]
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202) ~[guava-18.0.jar:na]
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:186) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:430) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:525) ~[apache-cassandra-3.1.1.jar:3.1.1]
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:279) ~[apache-cassandra-3.1.1.jar:3.1.1]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
I also tried to change the following option, but the data is still not copied to the new node:
auto_bootstrap: true
Could you please help me understand why the data is not replicated to the new node?
Please let me know if you need further information about my configuration.
Thank you for your help.
It appears (from https://issues.apache.org/jira/browse/CASSANDRA-10448) that this is due to CASSANDRA-10961. Applying that fix should address it.
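In practice that means upgrading the nodes to a release that contains the CASSANDRA-10961 patch and then retrying the rebuild. A sketch (datacenter1 is taken from the keyspace definition above):

# On the new node, after the cluster has been upgraded past 3.1.1
nodetool rebuild -- datacenter1

# The new node's Load should now grow well beyond 314 MB
nodetool status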
While starting Cassandra, I am getting the error below:
INFO 15:31:15 Completed flushing /home/sandeep/bck_up/data/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-tmp-ka-15-Data.db (0.000KiB) for commitlog position ReplayPosition(segmentId=1446651072594, position=106127)
INFO 15:31:15 Node localhost/127.0.0.1 state jump to normal
INFO 15:31:15 Netty using native Epoll event loop
ERROR 15:31:15 Exception encountered during startup
java.lang.NullPointerException: null
at org.apache.cassandra.transport.Server.run(Server.java:171) ~[apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.transport.Server.start(Server.java:117) ~[apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:492) [apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:575) [apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.11.jar:2.1.11]
java.lang.NullPointerException
at org.apache.cassandra.transport.Server.run(Server.java:171)
at org.apache.cassandra.transport.Server.start(Server.java:117)
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:492)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:575)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651)
Exception encountered during startup: null
INFO 15:31:15 Announcing shutdown
INFO 15:31:15 Node localhost/127.0.0.1 state jump to normal
INFO 15:31:17 Waiting for messaging service to quiesce
INFO 15:31:17 MessagingService has terminated the accept() thread
++++++
Below is my config file:
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /home/sandeep/bck_up/data/cassandra/data
# commitlog_directory: /var/lib/cassandra/commitlog
commitlog_directory: /home/sandeep/bck_up/data/cassandra/commit_logs
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /home/sandeep/bck_up/data/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "127.0.0.1"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: localhost
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: localhost
# port for Thrift to listen for clients on
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 5
compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
#server_encryption_options:
#internode_encryption: none
#keystore: conf/.keystore
#keystore_password: cassandra
#truststore: conf/.truststore
#truststore_password: cassandra
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
# cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
# require_client_auth: false
# enable or disable client/server encryption.
client_encryption_options:
    #enabled: false
    #keystore: conf/.keystore
    #keystore_password: cassandra
    # require_client_auth: false
    # Set trustore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
internode_compression: all
inter_dc_tcp_nodelay: false
++++
Can someone please help me figure out what's wrong with this setup? I've installed Cassandra 2.1 on Fedora 16 (64-bit). My Java version is: java version "1.8.0_60".
That's a really odd place to get an NPE.
https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/transport/Server.java#L163-L171
I'd suggest you open a bug report at https://issues.apache.org/jira/browse/CASSANDRA/