Here is my method to create the tables if they do not exist.
session.execute(
"CREATE TABLE simplex.songs (" +
"id uuid PRIMARY KEY," +
"title text," +
"album text," +
"artist text," +
"tags set<text>," +
"data blob" +
") IF NOT EXISTS ;");
session.execute(
"CREATE TABLE simplex.playlists (" +
"id uuid," +
"title text," +
"album text, " +
"artist text," +
"song_id uuid," +
"PRIMARY KEY (id, title, album, artist)" +
") IF NOT EXISTS ;");
}
When I run this, I get the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:108 missing EOF at 'IF'
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:147)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:79)
at com.datastax.driver.core.SessionManager.execute(SessionManager.java:75)
at com.example.cassandra.simple_client.SimpleClient.createSchema(SimpleClient.java:38)
at com.example.cassandra.simple_client.SimpleClient.main(SimpleClient.java:130)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:108 missing EOF at 'IF'
at com.datastax.driver.core.DefaultResultSetFuture.convertException(DefaultResultSetFuture.java:209)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:110)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:210)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:325)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:557)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Wrong syntax: in CQL, IF NOT EXISTS goes immediately after CREATE TABLE, before the table name, not at the end of the statement:
CREATE TABLE IF NOT EXISTS keyspace.table (columns ...)
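Applied to the first statement from the question, the corrected call would look like this (the playlists statement needs the same change):
// IF NOT EXISTS goes between CREATE TABLE and the table name.
session.execute(
    "CREATE TABLE IF NOT EXISTS simplex.songs (" +
    "id uuid PRIMARY KEY," +
    "title text," +
    "album text," +
    "artist text," +
    "tags set<text>," +
    "data blob" +
    ");");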
Cassandra 3.11.0, running on Ubuntu 16.04.4 with JRE 8.
I am trying to create a table with a uuid field as follows:
CREATE TABLE IF NOT EXISTS policy (
tenant_id text,
policy_id uuid,
name text,
enabled boolean,
creation_time bigint,
PRIMARY KEY (tenant_id, name)
);
After executing this, the schema shows policy_id created as timeuuid instead of uuid.
I saw a similar issue ("creating uuid type field in Cassandra table using CassandraAdminOperations.createTable"), but does that apply to cqlsh?
I dropped the table and keyspace and tried again, but no luck. The issue is intermittent.
I get the following exception:
Exception while loading CQL script:
com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function system.uuid (type uuid) to id (type timeuuid)
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:43)
I also see the following in the Cassandra logs:
org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found e686a660-8994-11e9-984c-2767f9f5fd28; expected e5d72c80-8994-11e9-b706-831d59206120)
at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:808) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:770) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.config.Schema.updateTable(Schema.java:621) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1430) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1386) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1336) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) ~[apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) [apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) [apache-cassandra-3.11.0.jar:3.11.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
policy_id should have type uuid, but it has type timeuuid.
Is any additional keyword required in the CQL statement to create the table?
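One way to see what the cluster actually stored is to read the column metadata that Cassandra 3.x keeps in system_schema.columns. A minimal sketch, assuming a connected Session from the Java driver ('mykeyspace' is a placeholder for whatever keyspace the policy table lives in):
// Reads the declared type of policy_id straight from the schema tables.
ResultSet rs = session.execute(
    "SELECT type FROM system_schema.columns " +
    "WHERE keyspace_name = 'mykeyspace' " +   // placeholder keyspace name
    "AND table_name = 'policy' " +
    "AND column_name = 'policy_id';");
System.out.println(rs.one().getString("type")); // expected: uuid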
I used the following expression to convert rows to columns in a DataFrame using Scala:
val df = Seq(
("ID-1", "First Name", "Jolly"),
("ID-1", "Middle Name", "Jr"),
("ID-1", "Last Name", "Hudson"),
("ID-2", "First Name", "Kathy"),
("ID-2", "Last Name", "Oliver"),
("ID-3", "Last Name", "Short"),
("ID-3", "Middle Name", "M"),
("ID-4", "First Name", "Denver")
).toDF("ID", "Title", "Values")
df.filter($"Title" isin ("First Name", "Last Name", "Middle Name")).
groupBy("ID").pivot("Title").agg(first($"Values")).
select( $"ID", $"First Name", $"Last Name", $"Middle Name").
show(false)
// +----+----------+---------+-----------+
// |ID |First Name|Last Name|Middle Name|
// +----+----------+---------+-----------+
// |ID-1|Jolly |Hudson |Jr |
// |ID-3|null |Short |M |
// |ID-4|Denver |null |null |
// |ID-2|Kathy |Oliver |null |
// +----+----------+---------+-----------+
The output is as expected, but execution ended with the following exception:
java.lang.IllegalArgumentException: Field "null" does not exist
Please help me understand why this exception occurs even though the expected output was produced, and how to resolve it.
Here is the error log:
2018-09-12 12:09:54 [Executor task launch worker-1] ERROR o.a.s.e.Executor - Exception in task 15.0 in stage 69.0 (TID 4453)
java.lang.IllegalArgumentException: Field "null" does not exist.
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:233)
at org.apache.spark.sql.types.StructType$$anonfun$fieldIndex$1.apply(StructType.scala:233)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.types.StructType.fieldIndex(StructType.scala:232)
at org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema.fieldIndex(rows.scala:213)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getColumn(DataQualityRule.scala:147)
at gbam.refdata.dataquality_rules2.VendorpartyAddress.getColumn(VendorpartyAddress.scala:27)
at gbam.refdata.dataquality.utils.DataQualityRule$$anonfun$getMissing$1$1.apply(DataQualityRule.scala:153)
at gbam.refdata.dataquality.utils.DataQualityRule$$anonfun$getMissing$1$1.apply(DataQualityRule.scala:153)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getMissing$1(DataQualityRule.scala:152)
at gbam.refdata.dataquality.utils.DataQualityRule$class.getBreaks(DataQualityRule.scala:156)
at gbam.refdata.dataquality_rules2.VendorpartyAddress.getBreaks(VendorpartyAddress.scala:27)
at gbam.refdata.dataquality_rules2.VendorpartyAddress$$anonfun$4.apply(VendorpartyAddress.scala:103)
at gbam.refdata.dataquality_rules2.VendorpartyAddress$$anonfun$4.apply(VendorpartyAddress.scala:103)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1869)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
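For what it's worth, the stack trace suggests the pivot itself is fine: the IllegalArgumentException is thrown by GenericRowWithSchema.fieldIndex inside DataQualityRule.getColumn, i.e. later code asks a row for a field name that is not in its schema (here, apparently the literal string "null"). A minimal sketch of that failure mode (the schema and values below are illustrative, not taken from the job above):
import org.apache.spark.sql.Row;
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class FieldIndexRepro {
    public static void main(String[] args) {
        // A row only carries the columns its schema declares.
        StructType schema = new StructType()
            .add("ID", DataTypes.StringType)
            .add("First Name", DataTypes.StringType);
        Row row = new GenericRowWithSchema(new Object[]{"ID-4", "Denver"}, schema);
        // Requesting a field that is not in the schema throws:
        // java.lang.IllegalArgumentException: Field "null" does not exist.
        row.fieldIndex("null");
    }
}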
I am trying to connect to Cassandra from R through JDBC:
casscon <- dbConnect(cassdrv, "jdbc:cassandra://localhost:9042/quantum_cassandra")
12:31:02.140 [main] DEBUG c.datastax.driver.jdbc.SessionHolder - Final Properties to Connection: {user=, password=, portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
12:31:02.140 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
12:31:02.230 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
12:31:02.232 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
12:31:02.315 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
12:31:02.322 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] closing connection
12:31:02.323 [New I/O worker #4] DEBUG com.datastax.driver.core.Connection - Not terminating Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true]: there are still pending requests
12:31:02.325 [New I/O worker #4] DEBUG com.datastax.driver.core.Connection - Not terminating Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true]: there are still pending requests
12:31:02.329 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] has already terminated
12:31:02.331 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] error on localhost/127.0.0.1:9042 connection, no more host to try
com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces
at com.datastax.driver.core.Responses$Error.asException(Responses.java:103) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:140) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:158) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:734) ~[cassandra-driver-core-2.1.6-SNAPSHOT.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.9.0.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.9.0.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
12:31:02.337 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
12:31:02.352 [main] DEBUG o.a.c.cql.jdbc.CassandraDriver - Final Properties to Connection: {user=, password=, portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
12:31:02.381 [main] DEBUG o.a.c.cql.jdbc.CassandraDriver - Final Properties to Connection: {portNumber=9042, databaseName=quantum_cassandra, serverName=localhost}
Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2147483648)!
Could anyone please help with this issue?
library(RJDBC)
drv <- JDBC("org.apache.cassandra.cql.jdbc.CassandraDriver",list.files("C:/Program Files/DataStax Community/apache-cassandra/lib",pattern="jar$",full.names=T))
conn <- dbConnect(drv, "jdbc:cassandra://localhost:9042/dbname")
result <- dbGetQuery(conn, "select columnname from tablename")
Hope this works.
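The "unconfigured table schema_keyspaces" error is a strong hint of a version mismatch: the logs show cassandra-driver-core 2.1.6 underneath the JDBC wrapper, and drivers of that generation read the schema from system.schema_keyspaces, a table that was removed in Cassandra 3.0 (schema metadata moved to system_schema). The later Thrift "negative frame size" error likewise suggests a Thrift-era driver (org.apache.cassandra.cql.jdbc.CassandraDriver defaults to Thrift on 9160) talking to the native-protocol port 9042. If switching away from the JDBC wrapper is an option, a minimal sketch using the native Java driver instead (assumes cassandra-driver-core 3.x and its com.datastax.driver.core classes on the classpath):
// Connect natively and run a sanity query against the server.
Cluster cluster = Cluster.builder()
        .addContactPoint("localhost")
        .withPort(9042)
        .build();
Session session = cluster.connect("quantum_cassandra");
ResultSet rs = session.execute("SELECT release_version FROM system.local;");
System.out.println(rs.one().getString("release_version"));
cluster.close();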
I am using the Java DataStax driver to execute a CQL query that creates a keyspace and a table together in one request, and I receive the EOF syntax error below.
session.execute("CREATE KEYSPACE testkeyspace WITH REPLICATION = { 'class' : 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3' } AND DURABLE_WRITES = true;"+
"CREATE TABLE testkeyspace.users (" +
" name text," +
" birth_year int," +
" gender text," +
" PRIMARY KEY (name)" +
") WITH read_repair_chance = 0.0" +
" AND dclocal_read_repair_chance = 0.1" +
" AND gc_grace_seconds = 864000" +
" AND bloom_filter_fp_chance = 0.01" +
" AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }" +
" AND comment = ''" +
" AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' }" +
" AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.LZ4Compressor' }" +
" AND default_time_to_live = 0" +
" AND speculative_retry = '99.0PERCENTILE'" +
" AND min_index_interval = 128" +
" AND max_index_interval = 2048;");
Exception trace:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:159 missing EOF at 'CREATE' (...} AND DURABLE_WRITES = true;[CREATE] TABLE...)
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58)
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
at com.example.helloworld.HelloWorld.main(HelloWorld.java:58)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 1:159 missing EOF at 'CREATE' (...} AND DURABLE_WRITES = true;[CREATE] TABLE...)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:132)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
You need to separate the two statements into distinct calls to session.execute.
The native protocol is designed so that each request executes exactly one statement. There are batch requests that can carry multiple statements, but I'm not sure they make much sense for schema DDL. The driver also performs schema-agreement polling after each schema change, to make sure references to the new element succeed on any host once the request completes.
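For example, reusing the statements from the question (the long list of table options can stay on the CREATE TABLE statement unchanged; it is trimmed here for brevity):
// One DDL statement per execute() call.
session.execute(
    "CREATE KEYSPACE testkeyspace WITH REPLICATION = " +
    "{ 'class' : 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3' } " +
    "AND DURABLE_WRITES = true;");
session.execute(
    "CREATE TABLE testkeyspace.users (" +
    " name text," +
    " birth_year int," +
    " gender text," +
    " PRIMARY KEY (name)" +
    ");");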
I have the following code, which tries to join two Cassandra tables in Spark.
val imageKeywords = sc.cassandraTable[ImageMetadata]("images", "metadata")
val imageAndPageKeywords = imageKeywords
.joinWithCassandraTable[PagesMetadata]("pages2", "metadata")
.on(SomeColumns("tid", "url" as "pu"))
The case classes I am using to map the data are below:
case class ImageMetadata(tid: String, iu: String, pu: Option[String],
mk: List[String], fk: List[String], ak: List[String], ipk: List[String], pk: List[String], ik: List[String], ck: List[String])
case class PagesMetadata(tid: String, url: String, pk: List[String], uk: List[String], hk: List[String], ok: List[String], tc: List[String])
I get an error when I run operations like the following:
imageAndPageKeywords.collect.toList.sortBy(_._1.tid).take(10).foreach(println)
Error stack trace:
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid null value for partition key part url
at com.datastax.driver.core.Responses$Error.asException(Responses.java:103)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:140)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:293)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:455)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:734)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.handler.timeout.IdleStateAwareChannelUpstreamHandler.handleUpstream(IdleStateAwareChannelUpstreamHandler.java:36)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
Simple: the exception tells you that the join cannot be performed because the column used to join ImageMetadata with PagesMetadata is null for some rows.
In your case, some pu values in ImageMetadata are None. pu is mapped onto url, which is part of the partition key of pages2.metadata, and a partition key column can never be null.
Note that you declared pu as Option[String] in ImageMetadata even though it is joined against a partition key column.
One solution is to filter out the rows whose pu is undefined before joining:
val imageAndPageKeywords = imageKeywords
  .filter(im => im.pu.isDefined) // keep only rows with a defined pu
  .joinWithCassandraTable[PagesMetadata]("pages2", "metadata")
  .on(SomeColumns("tid", "url" as "pu"))