I am using Spark's FP-growth algorithm. I was getting OOM errors when doing a collect, so I changed the code to save the results to a text file on HDFS rather than collecting them on the driver node. Here is the related code:
// Model building:
import org.apache.spark.mllib.fpm.FPGrowth

val fpg = new FPGrowth()
  .setMinSupport(0.01)
  .setNumPartitions(10)
val model = fpg.run(transaction_distinct)
Here is a transformation that should give me an RDD[String]:
val mymodel = model.freqItemsets.map { itemset =>
  val model_res = itemset.items.mkString("[", ",", "]") + ", " + itemset.freq
  model_res
}
I then save the model results as follows. Unfortunately, this is really slow:
mymodel.saveAsTextFile("fpm_model")
I get these errors:
16/02/04 14:47:28 ERROR ErrorMonitor: AssociationError [akka.tcp://sparkDriver@ipaddress:46811] -> [akka.tcp://sparkExecutor@hostname:39720]: Error [Association failed with [akka.tcp://sparkExecutor@hostname:39720]][akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@hostname:39720]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hostname/ipaddress:39720] akka.event.Logging$Error$NoCause$
16/02/04 14:47:28 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, hostname, 58683)
16/02/04 14:47:28 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
16/02/04 14:47:28 ERROR ErrorMonitor: AssociationError [akka.tcp://sparkDriver@ipaddress:46811] -> [akka.tcp://sparkExecutor@hostname:39720]: Error [Association failed with [akka.tcp://sparkExecutor@hostname:39720]][akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@hostname:39720]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hostname/ipaddress:39720
I encountered a problem when connecting to a NAS shared directory using spring-integration-smb.
I am able to connect to another NAS shared directory, but for the pre-prod NAS I run into the problem below.
Also, the shared-server administrator confirmed that both directories have the same configuration.
You will find the stack trace below:
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbTransportImpl - Disconnecting transport while still in use Transport12[NAS03/XXXXXXXX:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=2,usage=1]]
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbSessionImpl - Logging off session while still in use SmbSession[credentials=XXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=3,usage=1]:[SmbTree[share=PPD,service=null,tid=4,inDfs=false,inDomainDfs=false,connectionState=0,usage=2]]
07 mars 2022;14:49:50.737 [scheduling-1] ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessagingException: Problem occurred while synchronizing '' to local directory; nested exception is org.springframework.messaging.MessagingException: Failure occurred while copying '/test.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/test.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:348)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:267)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:69)
at org.springframework.integration.endpoint.AbstractFetchLimitingMessageSource.doReceive(AbstractFetchLimitingMessageSource.java:47)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:142)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:212)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:444)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:413)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$4(AbstractPollingEndpoint.java:348)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$5(AbstractPollingEndpoint.java:341)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying '/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:551)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:488)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyIfNotNull(AbstractInboundFileSynchronizer.java:403)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.transferFilesFromRemoteToLocal(AbstractInboundFileSynchronizer.java:386)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.lambda$synchronizeToLocalDirectory$0(AbstractInboundFileSynchronizer.java:342)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:452)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:341)
... 21 more
Caused by: org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:188)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:545)
... 27 more
Caused by: jcifs.smb.SmbException: The parameter is incorrect.
at jcifs.smb.SmbTransportImpl.checkStatus2(SmbTransportImpl.java:1467)
at jcifs.smb.SmbTransportImpl.checkStatus(SmbTransportImpl.java:1578)
at jcifs.smb.SmbTransportImpl.sendrecv(SmbTransportImpl.java:1027)
at jcifs.smb.SmbTransportImpl.send(SmbTransportImpl.java:1549)
at jcifs.smb.SmbSessionImpl.send(SmbSessionImpl.java:409)
at jcifs.smb.SmbTreeImpl.send(SmbTreeImpl.java:472)
at jcifs.smb.SmbTreeConnection.send0(SmbTreeConnection.java:404)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:318)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:298)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:130)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:117)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1775)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1744)
at jcifs.smb.SmbFile.queryPath(SmbFile.java:793)
at jcifs.smb.SmbFile.exists(SmbFile.java:879)
at jcifs.smb.SmbFile.isFile(SmbFile.java:1102)
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:182)
... 28 more
Here is my code:
@Bean
public SmbSessionFactory smbSessionFactory() {
    VaultResponse vaultResponse = vaultTemplate
            .opsForKeyValue(vaultPath, VaultKeyValueOperationsSupport.KeyValueBackend.KV_2)
            .get(vaultSecretsPath.toLowerCase());
    SmbSessionFactory smbSession = new SmbSessionFactory();
    smbSession.setHost(properties.getNasHost());
    smbSession.setPort(properties.getNasPort());
    smbSession.setDomain(properties.getNasDomain());
    if (vaultResponse != null) {
        Map<String, Object> data = vaultResponse.getData();
        smbSession.setUsername(data != null && data.get("nasUsername") != null ? (String) data.get("nasUsername") : "");
        smbSession.setPassword(data != null && data.get("nasPassword") != null ? (String) data.get("nasPassword") : "");
    }
    smbSession.setShareAndDir(properties.getNasShareAndDir());
    smbSession.setReplaceFile(true);
    smbSession.setSmbMinVersion(DialectVersion.SMB1);
    smbSession.setSmbMaxVersion(DialectVersion.SMB311);
    return smbSession;
}
Thank you in advance.
I'm trying to set up a Kafka Connect sink to collect data from a topic into a Cassandra table using the DataStax connector: https://downloads.datastax.com/#akc
I am running a standalone worker directly on the broker, with Kafka 0.10.2.2-1:
name=dse-sink
connector.class=com.datastax.kafkaconnector.DseSinkConnector
tasks.max=1
datastax-java-driver.advanced.protocol.version = V4
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
plugin.path=/usr/share/java/kafka-connect-dse/kafka-connect-dse-1.2.1.jar
topics=connect-test
contactPoints=172.16.0.48
loadBalancing.localDc=datacenter1
port=9042
ignoreErrors=true
topic.connect-test.cdrs.test.mapping= kafkakey=key, value=value
topic.connect-test.cdrs.test.consistencyLevel=LOCAL_QUORUM
But I get the following error:
[2019-12-23 16:58:43,165] ERROR Task dse-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
java.lang.VerifyError: Bad return type
Exception Details:
Location:
com/fasterxml/jackson/databind/cfg/MapperBuilder.streamFactory()Lcom/fasterxml/jackson/core/TokenStreamFactory; #7: areturn
Reason:
Type 'com/fasterxml/jackson/core/JsonFactory' (current frame, stack[0]) is not assignable to 'com/fasterxml/jackson/core/TokenStreamFactory' (from method signature)
Current Frame:
bci: #7
flags: { }
locals: { 'com/fasterxml/jackson/databind/cfg/MapperBuilder' }
stack: { 'com/fasterxml/jackson/core/JsonFactory' }
Bytecode:
0x0000000: 2ab4 0002 b600 08b0
at com.fasterxml.jackson.databind.json.JsonMapper.builder(JsonMapper.java:114)
at com.datastax.dsbulk.commons.codecs.json.JsonCodecUtils.getObjectMapper(JsonCodecUtils.java:36)
at com.datastax.kafkaconnector.codecs.CodecSettings.init(CodecSettings.java:131)
at com.datastax.kafkaconnector.state.LifeCycleManager.lambda$buildInstanceState$9(LifeCycleManager.java:423)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1625)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.datastax.kafkaconnector.state.LifeCycleManager.buildInstanceState(LifeCycleManager.java:457)
at com.datastax.kafkaconnector.state.LifeCycleManager.lambda$startTask$0(LifeCycleManager.java:106)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at com.datastax.kafkaconnector.state.LifeCycleManager.startTask(LifeCycleManager.java:101)
at com.datastax.kafkaconnector.DseSinkTask.start(DseSinkTask.java:74)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:244)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:145)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
There is no additional error on the Cassandra or Kafka side.
I see an active connection on the Cassandra node, but nothing arrives in the keyspace.
Any idea why?
IMHO this is a problem caused by the use of the JSON internal converters with BigDecimal data (see the related SO question). As described in the following blog post, internal.key.converter and internal.value.converter have been deprecated since Kafka 2.0 and shouldn't be set explicitly. Can you comment out all the internal.* properties and retry?
P.S. Also see how JSON + Decimal has changed in Kafka 2.4
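For illustration, here is a minimal sketch of the properties file from the question with the internal.* entries commented out (everything else left exactly as you have it):
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# deprecated since Kafka 2.0; let the worker use its built-in defaults instead
# internal.key.converter=org.apache.kafka.connect.json.JsonConverter
# internal.value.converter=org.apache.kafka.connect.json.JsonConverter
# internal.key.converter.schemas.enable=false
# internal.value.converter.schemas.enable=false
# ... remaining properties unchanged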
I have the following application (I am starting and stopping Spark) on Windows. I use Scala IDE (Eclipse). I get the "A master URL must be set in your configuration" error even though I have set it here. I am using Spark 2.4.4.
Can someone please help me fix this issue?
import org.apache.spark._
import org.apache.spark.sql._

object SampleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("Simple Application")
    val sc = new SparkContext(conf)
    sc.stop()
  }
}
The error is:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/10/28 22:58:56 INFO SparkContext: Running Spark version 2.4.4
19/10/28 22:58:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/10/28 22:58:56 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$5(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.spark.renga.SampleApp$.main(SampleApp.scala:8)
at com.spark.renga.SampleApp.main(SampleApp.scala)
19/10/28 22:58:56 ERROR Utils: Uncaught exception in thread main
java.lang.NullPointerException
at org.apache.spark.SparkContext.postApplicationEnd(SparkContext.scala:2416)
at org.apache.spark.SparkContext.$anonfun$stop$2(SparkContext.scala:1931)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1931)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$5(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.spark.renga.SampleApp$.main(SampleApp.scala:8)
at com.spark.renga.SampleApp.main(SampleApp.scala)
19/10/28 22:58:56 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$5(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at com.spark.renga.SampleApp$.main(SampleApp.scala:8)
at com.spark.renga.SampleApp.main(SampleApp.scala)
If you are using version 2.4.4, try this:
import org.apache.spark.sql.SparkSession

object SampleApp {
  def main(args: Array[String]) {
    val spark = SparkSession
      .builder
      .master("local[*]")
      .appName("test")
      .getOrCreate()

    println(spark.sparkContext.version)
    spark.stop()
  }
}
So this is what I tried in Spark Shell.
scala> import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.HiveContext
scala> import java.nio.file.Files
import java.nio.file.Files
scala> val hiveDir = Files.createTempDirectory("hive")
hiveDir: java.nio.file.Path = /var/folders/gg/g3hk6fcj4rxc6lb1qsvxc_vdxxwf28/T/hive5050481206678469338
scala> val hiveContext = new HiveContext(sc)
15/12/31 12:05:27 INFO HiveContext: Initializing execution hive, version 0.13.1
hiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@6f959640
scala> hiveContext.sql(s"SET hive.metastore.warehouse.dir=${hiveDir.toUri}")
15/12/31 12:05:34 INFO HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
...
res0: org.apache.spark.sql.DataFrame = [: string]
scala> Seq("create database foo").foreach(hiveContext.sql)
15/12/31 12:05:42 INFO ParseDriver: Parsing command: create database foo
15/12/31 12:05:42 INFO ParseDriver: Parse Completed
...
15/12/31 12:05:43 INFO HiveMetaStore: 0: create_database: Database(name:foo, description:null, locationUri:null, parameters:null, ownerName:aa8y, ownerType:USER)
15/12/31 12:05:43 INFO audit: ugi=aa8y ip=unknown-ip-addr cmd=create_database: Database(name:foo, description:null, locationUri:null, parameters:null, ownerName:aa8y, ownerType:USER)
15/12/31 12:05:43 INFO HiveMetaStore: 0: get_database: foo
15/12/31 12:05:43 INFO audit: ugi=aa8y ip=unknown-ip-addr cmd=get_database: foo
15/12/31 12:05:43 ERROR RetryingHMSHandler: MetaException(message:Unable to create database path file:/user/hive/warehouse/foo.db, failed to create database foo)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:734)
...
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/12/31 12:05:43 ERROR DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Unable to create database path file:/user/hive/warehouse/foo.db, failed to create database foo)
at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:248)
...
at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:242)
... 78 more
15/12/31 12:05:43 ERROR Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to create database path file:/user/hive/warehouse/foo.db, failed to create database foo)
15/12/31 12:05:43 ERROR ClientWrapper:
======================
HIVE FAILURE OUTPUT
======================
SET hive.metastore.warehouse.dir=file:///var/folders/gg/g3hk6fcj4rxc6lb1qsvxc_vdxxwf28/T/hive5050481206678469338/
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to create database path file:/user/hive/warehouse/foo.db, failed to create database foo)
======================
END HIVE FAILURE OUTPUT
======================
org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to create database path file:/user/hive/warehouse/foo.db, failed to create database foo)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:349)
...
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
scala>
But it doesn't seem to recognize the directory I am setting. I've trimmed the stack trace above since it was very verbose; the entire stack trace is here.
I am not sure what I am doing wrong. I would appreciate any help.
I have the following code:
StringSerializer ss = StringSerializer.get();
String cf = "TEST";

CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setCassandraThriftSocketTimeout(40000);
conf.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_BLOCK);
conf.setRetryDownedHostsDelayInSeconds(5);
conf.setRetryDownedHostsQueueSize(128);
conf.setRetryDownedHosts(true);
conf.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());

String key = Long.toString(System.currentTimeMillis());
Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
Keyspace keyspace = HFactory.createKeyspace("TestCluster", cluster);
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

int count = 0;
while (!"q".equals(new Scanner(System.in).next())) {
    try {
        mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
        count++;
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I can write some values using it, but when I restart Cassandra, it fails. Here is the log:
[15:11:07] INFO [CassandraHostRetryService ] Downed Host Retry service started with queue size 128 and retry delay 5s
[15:11:07] INFO [JmxMonitor ] Registering JMX me.prettyprint.cassandra.service_ASG:ServiceType=hector,MonitorType=hector
[15:11:17] ERROR [HThriftClient ] Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:308)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:257)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] ERROR [HConnectionManager ] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9160
[15:11:17] ERROR [HConnectionManager ] Pool state on shutdown: :{localhost(127.0.0.1):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown triggered on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown complete on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [CassandraHostRetryService ] Host detected as down was added to retry queue: localhost(127.0.0.1):9160
[15:11:17] WARN [HConnectionManager ] Could not fullfill request on this host CassandraClient
[15:11:17] WARN [HConnectionManager ] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:82)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:236)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:157)
at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:466)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:455)
at >me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:78)
... 5 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Added host localhost(127.0.0.1):9160 to pool
You have set:
conf.setRetryDownedHostsDelayInSeconds(5);
Try waiting for more than 5 seconds after the restart.
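For example, a minimal sketch of the knob involved (values are only illustrative, taken from your own configuration):
// The Downed Host Retry service re-checks dead hosts every retryDownedHostsDelayInSeconds,
// so after a Cassandra restart, writes can keep failing until the next retry cycle re-adds the host.
CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setRetryDownedHosts(true);
conf.setRetryDownedHostsDelayInSeconds(5); // host is only re-added up to ~5 s after it is back up
conf.setRetryDownedHostsQueueSize(128);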
Also, you may need to upgrade.
What size have you set for thrift_max_message_length_in_mb?
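(For reference, that setting lives in cassandra.yaml on the server side; the values below are only an illustrative sketch, not your actual configuration:)
# cassandra.yaml (server side), illustrative values only
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16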
Kind regards.