I added a new node (not a seed node) to my Cassandra cluster. I now have three nodes in my cluster:
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address          Load       Tokens  Owns (effective)  Host ID  Rack
UN  XXX.XXX.XXX.XXX  52.25 GB   256     100.0%            XXX      rack1
UN  XXX.XXX.XXX.XXX  63.65 GB   256     100.0%            XXX      rack1
UN  XXX.XXX.XXX.XXX  314.72 MB  256     100.0%            XXX      rack1
I have a replication factor of 3:
DESCRIBE KEYSPACE mykeyspace
CREATE KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'} AND durable_writes = true;
but the data is not replicated to the new node (the one with only 314 MB of data).
I tried nodetool rebuild, which failed with the following:
ERROR [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,765 StreamSession.java:520 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Streaming error occurred
java.lang.IllegalArgumentException: Unknown type 0
    at org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
ERROR [Thread-16] 2016-11-11 08:28:42,765 CassandraDaemon.java:195 - Exception in thread Thread[Thread-16,5,RMI Runtime]
java.lang.RuntimeException: java.lang.InterruptedException
    at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-18.0.jar:na]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
Caused by: java.lang.InterruptedException: null
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) ~[na:1.8.0_74]
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048) ~[na:1.8.0_74]
    at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353) ~[na:1.8.0_74]
    at org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:184) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.1.1.jar:3.1.1]
    ... 1 common frames omitted
INFO [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,805 StreamResultFuture.java:182 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Session with /XXX.XXX.XXX.XXX is complete
WARN [STREAM-IN-/XXX.XXX.XXX.XXX] 2016-11-11 08:28:42,807 StreamResultFuture.java:209 - [Stream #0e7a0580-a81b-11e6-9a1c-6d75503d5d02] Stream failed
ERROR [RMI TCP Connection(14)-127.0.0.1] 2016-11-11 08:28:42,808 StorageService.java:1128 - Error while rebuilding node
org.apache.cassandra.streaming.StreamException: Stream failed
    at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) ~[guava-18.0.jar:na]
    at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457) ~[guava-18.0.jar:na]
    at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-18.0.jar:na]
    at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-18.0.jar:na]
    at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202) ~[guava-18.0.jar:na]
    at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:186) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:430) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:525) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:279) ~[apache-cassandra-3.1.1.jar:3.1.1]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_74]
I also tried changing the following option, but the data is still not copied to the new node:
auto_bootstrap: true
Could you please help me understand why the data is not replicated to the new node?
Please let me know if you need further information about my configuration.
Thank you for your help.
It appears (from https://issues.apache.org/jira/browse/CASSANDRA-10448) that this is due to CASSANDRA-10961. Applying that fix should address it.
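For what it's worth, once the cluster is on a release that contains that fix, re-running the stream and checking per-keyspace ownership might look like this (illustrative only; the keyspace and datacenter names are taken from the question):

nodetool rebuild datacenter1    # re-stream this node's data; source DC name as in the keyspace definition
nodetool status mykeyspace      # Owns (effective) should reflect RF=3 across all three nodes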
When I start a crawl using Nutch 1.15 with this command:
/usr/local/nutch/bin/crawl --i -s urls/seed.txt crawldb 5
it starts to run, but then I get this error when it tries to fetch:
2019-02-10 15:29:32,021 INFO mapreduce.Job - Running job: job_local1267180618_0001
2019-02-10 15:29:32,145 INFO fetcher.FetchItemQueues - Using queue mode : byHost
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: threads: 50
2019-02-10 15:29:32,145 INFO fetcher.Fetcher - Fetcher: time-out divisor: 2
2019-02-10 15:29:32,149 INFO fetcher.QueueFeeder - QueueFeeder finished: total 1 records hit by time limit : 0
2019-02-10 15:29:32,234 WARN mapred.LocalJobRunner - job_local1267180618_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.nutch.net.URLExemptionFilters.<init>(URLExemptionFilters.java:39)
at org.apache.nutch.fetcher.FetcherThread.<init>(FetcherThread.java:154)
at org.apache.nutch.fetcher.Fetcher$FetcherRun.run(Fetcher.java:222)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2019-02-10 15:29:33,023 INFO mapreduce.Job - Job job_local1267180618_0001 running in uber mode : false
2019-02-10 15:29:33,025 INFO mapreduce.Job - map 0% reduce 0%
2019-02-10 15:29:33,028 INFO mapreduce.Job - Job job_local1267180618_0001 failed with state FAILED due to: NA
2019-02-10 15:29:33,038 INFO mapreduce.Job - Counters: 0
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher job did not succeed, job status:FAILED, reason: NA
2019-02-10 15:29:33,039 ERROR fetcher.Fetcher - Fetcher: java.lang.RuntimeException: Fetcher job did not succeed, job status:FAILED, reason: NA
at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:503)
at org.apache.nutch.fetcher.Fetcher.run(Fetcher.java:543)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:517)
And I get this error in the console, showing the command it was running:
Error running:
/usr/local/nutch/bin/nutch fetch -D mapreduce.job.reduces=2 -D mapred.child.java.opts=-Xmx1000m -D mapreduce.reduce.speculative=false -D mapreduce.map.speculative=false -D mapreduce.map.output.compress=true -D fetcher.timelimit.mins=180 crawlsites/segments/20190210152929 -noParsing -threads 50
In the end I had to delete the Nutch folder and do a fresh install, and it worked after that.
I'm experiencing node crashes, and the system.log file is showing a bunch of 'ReadTimeoutException' errors hitting 500 ms.
The cassandra.yaml file has the setting read_request_timeout_in_ms: 10000.
Can you folks please share how I can address these timeouts? Thanks in advance!
Error stack:
ERROR [SharedPool-Worker-241] 2017-02-01 13:18:27,663 Message.java:611 - Unexpected exception during request; channel = [id: 0x5d8abf33, /172.18.30.62:47580 => /216.12.225.9:9042]
java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:497) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:306) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.ClientState.login(ClientState.java:269) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [apache-cassandra-2.2.8.jar:2.2.8]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_111]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.2.8.jar:2.2.8]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:110) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:147) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1441) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1365) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1282) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:176) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:505) ~[apache-cassandra-2.2.8.jar:2.2.8]
at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) ~[apache-cassandra-2.2.8.jar:2.2.8]
... 13 common frames omitted
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,682 MessagingService.java:946 - READ messages were dropped in last 5000 ms: 149 for internal timeout and 0 for cross node timeout
INFO [Service Thread] 2017-02-01 13:18:27,693 StatusLogger.java:106 - enterprise.t_sf_venue_test 0,0
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,699 MessagingService.java:946 - REQUEST_RESPONSE messages were dropped in last 5000 ms: 7 for internal timeout and 0 for cross node timeout
INFO [Service Thread] 2017-02-01 13:18:27,699 StatusLogger.java:106 - enterprise.alestnstats 0,0
INFO [ScheduledTasks:1] 2017-02-01 13:18:27,699 MessagingService.java:946 - RANGE_SLICE messages were dropped in last 5000 ms: 116 for internal timeout and 0 for cross node timeout
As you can see in your logs, the failing query is not the one you are trying to execute.
The failing query is internal to Cassandra:
"SELECT * FROM system_auth.roles;"
These internal Cassandra queries (misc queries) do not use read_request_timeout_in_ms; instead, they use request_timeout_in_ms.
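For reference, both settings live in cassandra.yaml; a minimal sketch with illustrative values (not a recommendation):

# cassandra.yaml
read_request_timeout_in_ms: 10000   # regular read queries
request_timeout_in_ms: 20000        # miscellaneous/internal requests, e.g. the system_auth.roles lookup above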
In one of our single-node Cassandra deployments, there's this table schema:
CREATE TABLE CTS_SVC_PT_INT_READ (
svc_pt_id bigint,
meas_type_id bigint,
value double,
flags bigint,
read_time timestamp,
last_upd_time timestamp,
PRIMARY KEY (svc_pt_id, meas_type_id, read_time)
) WITH CLUSTERING ORDER BY (meas_type_id ASC, read_time DESC)
AND compaction = {
'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy',
'timestamp_resolution': 'MILLISECONDS',
'base_time_seconds': '3600',
'max_sstable_age_days': '365'
};
While querying select distinct svc_pt_id from cts.CTS_SVC_PT_INT_READ through the Java client, it fails with this exception:
com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:74)
at rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:43)
at rx.Observable.unsafeSubscribe(Observable.java:8314)
at rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:95)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:128)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
... 1 more
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:76)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 15 more
I see the same error if I issue this CQL command through cqlsh. Is it due to a read timeout or something else?
"ReadFailureException: Cassandra failure ... 0 replica responded, 1 failed)" indicates a failure, not a timeout. You might learn something further by looking at the cassandra log on the server.
I'm having issues connecting to Cassandra with Mutagen/Astyanax.
CassandraMutagen mutagen = new CassandraMutagenImpl();
mutagen.initialize("/mutations");
AstyanaxContext<Keyspace> ctx = new AstyanaxContext.Builder()
.forKeyspace("anser")
.withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("aConfig")
.setSeeds("localhost")
.setPort(9160))
.withAstyanaxConfiguration(
new AstyanaxConfigurationImpl()
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE)
.setDiscoveryType(NodeDiscoveryType.NONE))
.buildKeyspace(ThriftFamilyFactory.getInstance());
ctx.start();
Keyspace keyspace = ctx.getClient();
Plan.Result<Integer> result = mutagen.mutate(keyspace);
And when I try to connect, I get this exception:
com.toddfast.mutagen.MutagenException: Could not create column family "schema_version"
at com.toddfast.mutagen.cassandra.CassandraSubject.getCurrentState(CassandraSubject.java:79)
at com.toddfast.mutagen.cassandra.CassandraCoordinator.accept(CassandraCoordinator.java:48)
at com.toddfast.mutagen.basic.BasicPlanner.getPlan(BasicPlanner.java:66)
at com.toddfast.mutagen.cassandra.impl.CassandraPlanner.getPlan(CassandraPlanner.java:153)
at com.toddfast.mutagen.cassandra.impl.CassandraMutagenImpl.mutate(CassandraMutagenImpl.java:100)
at com.salesforce.analytics.cc.CassandraChangeControlTests2.test_mutagen(CassandraChangeControlTests2.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: com.netflix.astyanax.connectionpool.exceptions.TransportException: TransportException: [host=localhost(127.0.0.1):9160, latency=2(2), attempts=1]org.apache.thrift.transport.TTransportException: Frame size (352518912) larger than max length (16384000)!
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:197)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:137)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:338)
at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.executeDdlOperation(ThriftKeyspaceImpl.java:511)
at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.internalCreateColumnFamily(ThriftKeyspaceImpl.java:790)
at com.netflix.astyanax.thrift.ThriftKeyspaceImpl.createColumnFamily(ThriftKeyspaceImpl.java:580)
at com.toddfast.mutagen.cassandra.CassandraSubject.createSchemaVersionTable(CassandraSubject.java:53)
at com.toddfast.mutagen.cassandra.CassandraSubject.getCurrentState(CassandraSubject.java:76)
... 29 more
Caused by: org.apache.thrift.transport.TTransportException: Frame size (352518912) larger than max length (16384000)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_set_keyspace(Cassandra.java:608)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:595)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:125)
... 36 more
I tried changing the frame size settings in my cassandra.yaml to:
thrift_framed_transport_size_in_mb: 360
thrift_max_message_length_in_mb: 361
I can see this reflected in the startup logs; however, I still get the same frame size exception.
My Cassandra version is 2.1.12, and my dependencies are:
<dependency org="com.toddfast.mutagen" name="mutagen-cassandra" rev="0.4.0" conf="master->default"/>
<dependency org="org.apache.cassandra" name="cassandra-all" rev="2.1.12" conf="master->default"/>
Any suggestions?
Ah, I see that you also have to set the frame size client-side, and the version of Astyanax that mutagen-cassandra is built with (1.56.44) is too old to allow configuring it, so I'm going to rebuild from source!
See http://mail-archives.apache.org/mod_mbox/cassandra-user/201504.mbox/%3CCAORswtx9UJvW6aexT1N6tuxzCLBVJL7JCXfJ=DUX3cvMSsgiJw#mail.gmail.com%3E
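For anyone hitting the same thing: my understanding from the thread above is that newer Astyanax releases expose a client-side frame-size setter on the connection pool configuration (setMaxThriftSize; treat this as an assumption and check your version, since 1.56.44 predates it). Roughly:

import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;

// Sketch only: raise the client-side Thrift frame limit to match the server's
// thrift_framed_transport_size_in_mb (the value here is in bytes).
ConnectionPoolConfigurationImpl poolConfig = new ConnectionPoolConfigurationImpl("aConfig")
        .setSeeds("localhost")
        .setPort(9160)
        .setMaxThriftSize(360 * 1024 * 1024); // assumes a version that has this setter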
I'm trying to load data with sstableloader in Cassandra 2.0.7.
The terminal shows 100% progress. When I check streaming with nodetool netstats, it shows:
Mode: NORMAL
Bulk Load 21d7d610-a5f2-11e5-baa7-8fc95be03ac4
/10.19.150.70
Receiving 4 files, 138895248 bytes total
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-8-Data.db 67039680/67039680 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-10-Data.db 3074549/3074549 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-9-Data.db 43581052/43581052 bytes(100%) received from /10.19.150.70
/root/data/whatyyun/metadata/whatyyun-metadata-tmp-jb-7-Data.db 25199967/25199967 bytes(100%) received from /10.19.150.70
Read Repair Statistics:
Attempted: 0
Mismatch (Blocking): 0
Mismatch (Background): 0
Pool Name   Active  Pending  Completed
Commands    n/a     0        0
Responses   n/a     0        11671
The sstableloader hangs for hours. I checked the log, and there is an error that may be related:
ERROR [CompactionExecutor:7] 2015-12-19 09:45:53,811 CassandraDaemon.java (line 198) Exception in thread Thread[CompactionExecutor:7,1,main]
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:532)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
at org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:62)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:51)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:31)
at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:804)
at java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828)
at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215)
at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107)
at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:407)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR [NonPeriodicTasks:1] 2015-12-19 09:45:53,812 CassandraDaemon.java (line 198) Exception in thread Thread[NonPeriodicTasks:1,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:413)
at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:142)
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:113)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409)
... 9 more
Caused by: java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:532)
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
at org.apache.cassandra.db.marshal.TimeUUIDType.compareTimestampBytes(TimeUUIDType.java:62)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:51)
at org.apache.cassandra.db.marshal.TimeUUIDType.compare(TimeUUIDType.java:31)
at org.apache.cassandra.dht.LocalToken.compareTo(LocalToken.java:44)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:85)
at org.apache.cassandra.db.DecoratedKey.compareTo(DecoratedKey.java:36)
at java.util.concurrent.ConcurrentSkipListMap.findNode(ConcurrentSkipListMap.java:804)
at java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:828)
at java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1626)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:215)
at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:893)
at org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:107)
at org.apache.cassandra.db.index.SecondaryIndexManager.indexRow(SecondaryIndexManager.java:441)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:407)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more
The schema of the table is as follows:
CREATE TABLE metadata (
userid timeuuid,
dirname text,
basename text,
ctime timestamp,
fileid timeuuid,
imagefileid timeuuid,
imagefilesize int,
mtime timestamp,
nodetype int,
showname text,
size bigint,
timelong text,
PRIMARY KEY (userid, dirname, basename, ctime)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
CREATE INDEX idx_fileid ON metadata (fileid);
CREATE INDEX idx_nodetype ON metadata (nodetype);
Can I safely kill the sstableloader process? Has the bulk load actually finished?
You should try increasing the heap for sstableloader:
vim $(which sstableloader)
#########"$JAVA" $JAVA_AGENT -ea -cp "$CLASSPATH" $JVM_OPTS -Xmx$MAX_HEAP_SIZE \
"$JAVA" $JAVA_AGENT -ea -cp "$CLASSPATH" $JVM_OPTS -XX:+UseG1GC -Xmx10G -Xms10G -XX:+UseTLAB -XX:+ResizeTLAB \
-Dcassandra.storagedir="$cassandra_storagedir" \
-Dlogback.configurationFile=logback-tools.xml \
org.apache.cassandra.tools.BulkLoader "$@"
I hope that solves your issue.
Your node may be running out of resources, perhaps due to heavy load or some other process.
Try restarting Cassandra on the nodes under high load and see if that helps.
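If you do restart, a gentle sequence is usually (hedged: the service name depends on your install):

nodetool drain                  # flush memtables and stop accepting writes first
sudo service cassandra restart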
From the above error, it seems you are hitting a resource crunch, so you need to tune memory settings such as Xmx and Xms (the max and min heap sizes), which for older versions of Cassandra (i.e. 2.x) are set in the cassandra-env.sh file.
After tuning these, restart your node/cluster and try loading again.
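For example, in cassandra-env.sh (illustrative values; size them to your hardware):

MAX_HEAP_SIZE="8G"    # used for both -Xms and -Xmx
HEAP_NEWSIZE="2G"     # young-generation size (-Xmn)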