Invalid null value for partition key part url - apache-spark

I have the following code, which tries to join two Cassandra tables in Spark:
val imageKeywords = sc.cassandraTable[ImageMetadata]("images", "metadata")
val imageAndPageKeywords = imageKeywords
  .joinWithCassandraTable[PagesMetadata]("pages2", "metadata")
  .on(SomeColumns("tid", "url" as "pu"))
The case classes I use to map the data are as follows:
case class ImageMetadata(tid: String, iu: String, pu: Option[String],
  mk: List[String], fk: List[String], ak: List[String], ipk: List[String],
  pk: List[String], ik: List[String], ck: List[String])

case class PagesMetadata(tid: String, url: String, pk: List[String],
  uk: List[String], hk: List[String], ok: List[String], tc: List[String])
I get an error when I try to perform operations like the one below:
imageAndPageKeywords.collect.toList.sortBy(_._1.tid).take(10).foreach(println)
Error stack trace:
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid null value for partition key part url
at com.datastax.driver.core.Responses$Error.asException(Responses.java:103)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:140)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:293)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:455)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:734)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more

Simply put, the exception tells you that the join cannot be performed because the column used to join ImageMetadata with PagesMetadata is null for some rows.
In your case, some pu values in ImageMetadata (mapped to url) are null.
What is strange is that you define ImageMetadata with pu as nullable (Option[String]) even though it maps to url, which appears to be part of the target table's partition key.
One solution to make it work would be:
val imageAndPageKeywords = imageKeywords
  .filter(im => im.pu.isDefined)
  .joinWithCassandraTable[PagesMetadata]("pages2", "metadata")
  .on(SomeColumns("tid", "url" as "pu"))
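If you want to see how much data the filter will drop before running the join, a quick sanity check (a sketch, not from the original post; it reuses the imageKeywords RDD defined above):

// Count rows whose join key is missing; these are exactly the rows that
// trigger "Invalid null value for partition key part url".
val missingPu = imageKeywords.filter(_.pu.isEmpty).count()
println(s"Rows with null pu excluded from the join: $missingPu")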

Related

Why cqlsh "CREATE" statement creates a "uuid" field as "timeuuid"?

Cassandra 3.11.0, running on Ubuntu 16.04.4 with JRE 8.
I am trying to create a table with a uuid field as follows:
CREATE TABLE IF NOT EXISTS policy (
    tenant_id text,
    policy_id uuid,
    name text,
    enabled boolean,
    creation_time bigint,
    PRIMARY KEY (tenant_id, name)
);
After executing this, the schema shows policy_id created as timeuuid instead of uuid.
I saw a similar issue (creating uuid type field in Cassandra table using CassandraAdminOperations.createTable), but does it apply to cqlsh?
I dropped the table and keyspace and tried again, with no luck. The issue is intermittent.
I get the following exception:
Exception while loading CQL script:
com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function system.uuid (type uuid) to id (type timeuuid)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43)
I also see the following in the Cassandra logs:
org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found e686a660-8994-11e9-984c-2767f9f5fd28; expected e5d72c80-8994-11e9-b706-831d59206120)
at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:808) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:770) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.Schema.updateTable(Schema.java:621) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1430) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1386) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1336) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) [apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) [apache-cassandra-3.11.0.jar:3.11.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
policy_id should have type uuid, but it has type timeuuid.
Is any additional keyword required in the CQL statement to create the table?
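To see which type Cassandra actually recorded, one diagnostic (a sketch using the same DataStax Java driver that appears in the exception above; the keyspace name is a placeholder) is to read the server's own schema tables. Note also that the "Column family ID mismatch" in the server log typically indicates concurrent schema changes racing each other, which would fit the intermittent behaviour.

import scala.collection.JavaConverters._
import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect()
// system_schema.columns holds the authoritative column types in Cassandra 3.x.
val rs = session.execute(
  "SELECT column_name, type FROM system_schema.columns " +
  "WHERE keyspace_name = 'my_keyspace' AND table_name = 'policy'")
rs.all().asScala.foreach(row =>
  println(s"${row.getString("column_name")}: ${row.getString("type")}"))
cluster.close()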

Solution for "java.lang.IllegalArgumentException: Field "null" does not exist" while using pivot in DataFrames

I used the following expression to convert rows to columns in a DataFrame using Scala:
val df = Seq(
  ("ID-1", "First Name", "Jolly"),
  ("ID-1", "Middle Name", "Jr"),
  ("ID-1", "Last Name", "Hudson"),
  ("ID-2", "First Name", "Kathy"),
  ("ID-2", "Last Name", "Oliver"),
  ("ID-3", "Last Name", "Short"),
  ("ID-3", "Middle Name", "M"),
  ("ID-4", "First Name", "Denver")
).toDF("ID", "Title", "Values")

df.filter($"Title" isin ("First Name", "Last Name", "Middle Name")).
  groupBy("ID").pivot("Title").agg(first($"Values")).
  select($"ID", $"First Name", $"Last Name", $"Middle Name").
  show(false)
// +----+----------+---------+-----------+
// |ID |First Name|Last Name|Middle Name|
// +----+----------+---------+-----------+
// |ID-1|Jolly |Hudson |Jr |
// |ID-3|null |Short |M |
// |ID-4|Denver |null |null |
// |ID-2|Kathy |Oliver |null |
// +----+----------+---------+-----------+
The output is as expected, but the job then fails with the following exception:
java.lang.IllegalArgumentException: Field "null" does not exist
Please help me understand why this exception occurs even though the expected output was produced, and how to resolve it.
Here are the error logs:
2018-09-12 12:09:54 [Executor task launch worker-1] ERROR o.a.s.e.Executor - Exception in task 15.0 in stage 69.0 (TID 4453)
java.lang.IllegalArgumentException: Field "null" does not exist.
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:233)
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:233)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.types.StructType.fieldIndex(StructType.scala:232)
at org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema.fieldIndex(rows.scala:213)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getColumn(DataQualityRule.scala:147)
at gbam.refdata.dataquality_rules2.VendorpartyAddress.getColumn(VendorpartyAddress.scala:27)
at gbam.refdata.dataquality.utils.DataQualityRule$$anonfun$getMissing$1$1.apply(DataQualityRule.scala:153)
at gbam.refdata.dataquality.utils.DataQualityRule$$anonfun$getMissing$1$1.apply(DataQualityRule.scala:153)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getMissing$1(DataQualityRule.scala:152)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getBreaks(DataQualityRule.scala:156)
at gbam.refdata.dataquality_rules2.VendorpartyAddress.getBreaks(VendorpartyAddress.scala:27)
at gbam.refdata.dataquality_rules2.VendorpartyAddress$$anonfun$4.apply(VendorpartyAddress.scala:103)
at gbam.refdata.dataquality_rules2.VendorpartyAddress$$anonfun$4.apply(VendorpartyAddress.scala:103)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
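The stack trace shows the failure comes from fieldIndex inside the poster's own DataQualityRule/VendorpartyAddress code rather than from pivot itself, so a field name is presumably null or missing in some rows by the time getColumn runs. One defensive variant (a sketch under that assumption, not a confirmed fix for that code) is to pin the pivot values so every expected column always exists, and to backfill the resulting nulls:

import org.apache.spark.sql.functions.first

// Reuses df and the $-interpolator implicits from the question's session.
val expected = Seq("First Name", "Last Name", "Middle Name")
val pivoted = df
  .filter($"Title".isin(expected: _*))
  .groupBy("ID")
  // Passing the values explicitly guarantees these columns exist even
  // when a Title never occurs in the data.
  .pivot("Title", expected)
  .agg(first($"Values"))
  // Replace nulls in the pivoted string columns so downstream code that
  // cannot handle null field values does not break (an assumption about
  // the failure mode, not confirmed by the post).
  .na.fill("", expected)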

shc-core: NoSuchMethodError org.apache.hadoop.hbase.client.Put.addColumn

I am trying to use shc-core to save a Spark DataFrame into HBase.
My versions:
hbase: 1.1.2.2.6.4.0-91
spark: 1.6
scala: 2.10
shc: 1.1.1-1.6-s_2.10
hdp: 2.6.4.0-91
The configuration looks like this:
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

val schema_array = s"""{"type": "array", "items": ["string","null"]}""".stripMargin

def catalog: String = s"""{
  |"table":{"namespace":"default", "name":"tblename"},
  |"rowkey":"id",
  |"columns":{
  |"id":{"cf":"rowkey", "col":"id", "type":"string"},
  |"col1":{"cf":"data", "col":"col1", "avro":"schema_array"}
  |}
  |}""".stripMargin

df.write
  .options(Map(
    "schema_array" -> schema_array,
    HBaseTableCatalog.tableCatalog -> catalog,
    HBaseTableCatalog.newTable -> "5"
  ))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
Sometimes it works as expected, creating the table and saving all the data into HBase; but sometimes it fails with the following error:
Lost task 35.0 in stage 9.0 (TID 301, host): java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.addColumn([B[B[B)Lorg/apache/hadoop/hbase/client/Put;
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation$$anonfun$org$apache$spark$sql$execution$datasources$hbase$HBaseRelation$$convertToPut$1$1.apply(HBaseRelation.scala:211)
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation$$anonfun$org$apache$spark$sql$execution$datasources$hbase$HBaseRelation$$convertToPut$1$1.apply(HBaseRelation.scala:210)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.org$apache$spark$sql$execution$datasources$hbase$HBaseRelation$$convertToPut$1(HBaseRelation.scala:210)
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation$$anonfun$insert$1.apply(HBaseRelation.scala:219)
at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation$$anonfun$insert$1.apply(HBaseRelation.scala:219)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1112)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1277)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any ideas?
That was actually a classpath issue: I had two different versions of the HBase client on the classpath.
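A quick way to confirm which jar a conflicting class was loaded from (a generic JVM diagnostic, not part of the original answer):

// Ask the JVM where it resolved Put from; if this points at an HBase jar
// older than 1.0, the three-argument addColumn overload is likely missing.
val src = classOf[org.apache.hadoop.hbase.client.Put]
  .getProtectionDomain.getCodeSource.getLocation
println(s"Put loaded from: $src")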

Unable to establish Cassandra connection in R using RJDBC

casscon <- dbConnect(cassdrv, "jdbc:cassandra://localhost:9042/quantum_cassandra")
12:31:02.140 [main] DEBUG c.datastax.driver.jdbc.SessionHolder - Final Properties to Connection: {user=, password=, portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
12:31:02.140 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
12:31:02.230 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
12:31:02.232 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
12:31:02.315 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
12:31:02.322 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] closing connection
12:31:02.323 [New I/O worker #4] DEBUG com.datastax.driver.core.Connection - Not terminating Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true]: there are still pending requests
12:31:02.325 [New I/O worker #4] DEBUG com.datastax.driver.core.Connection - Not terminating Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true]: there are still pending requests
12:31:02.329 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] has already terminated
12:31:02.331 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] error on localhost/127.0.0.1:9042 connection, no more host to try
com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces
at com.datastax.driver.core.Responses$Error.asException(Responses.java:103) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:140) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:158) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:734) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.9.0.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
12:31:02.337 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
12:31:02.352 [main] DEBUG o.a.c.cql.jdbc.CassandraDriver - Final Properties to Connection: {user=, password=, portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
12:31:02.381 [main] DEBUG o.a.c.cql.jdbc.CassandraDriver - Final Properties to Connection: {portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2147483648)!
Could anyone please help with this issue?
library(RJDBC)
drv <- JDBC("org.apache.cassandra.cql.jdbc.CassandraDriver",
            list.files("C:/Program Files/DataStax Community/apache-cassandra/lib",
                       pattern = "jar$", full.names = TRUE))
conn <- dbConnect(drv, "jdbc:cassandra://localhost:9042/dbname")
result <- dbGetQuery(conn, "select columnname from tablename")
Hope this helps.
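A hedged reading of the trace (an interpretation, not part of the original answer): cassandra-driver-core 2.1.6 queries system.schema_keyspaces, a table Cassandra 3.x removed in favour of system_schema, and "Read a negative frame size" is the classic symptom of a Thrift-era JDBC driver talking to the native-protocol port 9042. If JDBC is not a hard requirement, a driver that speaks the native protocol connects to 9042 directly; a minimal Scala sketch:

import com.datastax.driver.core.Cluster

// Requires a cassandra-driver-core version that understands the
// Cassandra 3.x schema tables (3.0 or later).
val cluster = Cluster.builder()
  .addContactPoint("localhost")
  .withPort(9042) // native protocol port, matching the URL in the question
  .build()
val session = cluster.connect("quantum_cassandra")
println(session.execute("SELECT release_version FROM system.local").one().getString(0))
cluster.close()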

error during lightweight transaction in Cassandra using java driver?

Here is my method to create tables if they do not exist.
session.execute(
"CREATE TABLE simplex.songs (" +
"id uuid PRIMARY KEY," +
"title text," +
"album text," +
"artist text," +
"tags set<text>," +
"data blob" +
") IF NOT EXISTS ;");
session.execute(
"CREATE TABLE simplex.playlists (" +
"id uuid," +
"title text," +
"album text, " +
"artist text," +
"song_id uuid," +
"PRIMARY KEY (id, title, album, artist)" +
") IF NOT EXISTS ;");
}
When I run this, I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:108 missing EOF at 'IF'
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:147)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:79)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:75)
at com.example.cassandra.simple_client.SimpleClient.createSchema(SimpleClient.java:38)
at com.example.cassandra.simple_client.SimpleClient.main(SimpleClient.java:130)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:108 missing EOF at 'IF'
at com.datastax.driver.core.DefaultResultSetFuture.convertException(DefaultResultSetFuture.java:209)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:110)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:210)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:325)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:557)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
The syntax is wrong: IF NOT EXISTS belongs immediately after CREATE TABLE, not at the end of the statement. The correct form is:
CREATE TABLE IF NOT EXISTS keyspace.table (columns ...)
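For example, the first statement from the question with the clause moved to the right place (a sketch written in Scala; the identical session.execute call works from Java). As an aside, CREATE TABLE ... IF NOT EXISTS is a schema statement, not a lightweight transaction; LWTs are the conditional data statements such as INSERT ... IF NOT EXISTS.

session.execute(
  "CREATE TABLE IF NOT EXISTS simplex.songs (" +
    "id uuid PRIMARY KEY," +
    "title text," +
    "album text," +
    "artist text," +
    "tags set<text>," +
    "data blob" +
  ");")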
