Invalid index unpickling where object ref is repeated - scala-pickling

I'm pickling an object tree in which the same reference is repeated, and I get an invalid index exception when unpickling. Below is a test case.
import scala.pickling._
import json._

object JsonTest extends App {
  val obj = StringProp("test")
  val pickle = new PropTest(obj, obj).pickle.unpickle[PropTest]
}

case class StringProp(prop: String)
class PropTest(val prop1: StringProp, val prop2: StringProp)
Thanks in advance for taking a look at this.
Here is the stacktrace of the error:
Exception in thread "main" java.lang.Error: fatal error: invalid index 1 in unpicklee cache of length 1
at scala.pickling.internal.package$.lookupUnpicklee(package.scala:151)
at scala.pickling.json.JSONPickleReader$$anonfun$30.apply(JSONPickleFormat.scala:163)
at scala.pickling.json.JSONPickleReader.readPrimitive(JSONPickleFormat.scala:229)
at scala.pickling.CorePicklersUnpicklers$PrimitivePicklerUnpickler.unpickle(Custom.scala:313)
at com.ft.flexui.modules.pm.JsonTest$ComFtFlexuiModulesPmPropTestUnpickler2$2$.unpickle(JsonTest.scala:135)
at com.ft.flexui.modules.pm.JsonTest$delayedInit$body.apply(JsonTest.scala:135)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at com.ft.flexui.modules.pm.JsonTest$.main(JsonTest.scala:13)
at com.ft.flexui.modules.pm.JsonTest.main(JsonTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)

See scala/pickling#240 for the details.
So it looks like it is attempting to create a pointer to the object as a form of compression, but gets confused.
And Heather confirms:
somehow something is misbehaving when we generate this code to analyze object graphs at runtime.
The workaround for now is:
import scala.pickling.shareNothing._
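Applied to the test case above, the workaround looks like this (a minimal sketch, assuming the scala-pickling 0.x imports used in the question):

import scala.pickling._
import scala.pickling.json._
import scala.pickling.shareNothing._ // disable sharing: repeated refs are pickled inline

object JsonTest extends App {
  val obj = StringProp("test")
  // With sharing off, no back-reference is emitted for the second occurrence
  // of obj, so unpickling never touches the unpicklee cache
  val roundTripped = new PropTest(obj, obj).pickle.unpickle[PropTest]
}

case class StringProp(prop: String)
class PropTest(val prop1: StringProp, val prop2: StringProp)

The trade-off is that prop1 and prop2 come back as two separate StringProp instances rather than one shared object.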

Related

Is it possible to create broadcast variables within a Spark Streaming transformation function?

I tried to create a recoverable Spark Streaming job with some arguments fetched from a database. But then I hit a problem: it always gives me a serialization error when I try to restart the job from a checkpoint.
18/10/18 09:54:33 ERROR Executor: Exception in task 1.0 in stage 56.0 (TID 132)
java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to scala.collection.MapLike
at com.ptnj.streaming.alertJob.InputDataParser$.kafka_stream_handle(InputDataParser.scala:37)
at com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1.apply(InstanceAlertJob.scala:38)
at com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1.apply(InstanceAlertJob.scala:38)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I followed the advice from maxime G in this existing SO question, and it seems to help.
But now there is another exception. Because of that issue, I have to create broadcast variables during the stream transformation, like
val kafka_data_streaming = stream.map(x => DstreamHandle.kafka_stream_handle(url, x.value(), sc))
so I have to pass the SparkContext as a parameter into the transformation function, and then this occurs:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:546)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:264)
at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:545)
at com.ptnj.streaming.alertJob.InstanceAlertJob$.streaming_main(InstanceAlertJob.scala:38)
at com.ptnj.streaming.AlarmMain$.create_ssc(AlarmMain.scala:36)
at com.ptnj.streaming.AlarmMain$.main(AlarmMain.scala:14)
at com.ptnj.streaming.AlarmMain.main(AlarmMain.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@5fb7183b)
- field (class: com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1, name: sc$1, type: class org.apache.spark.SparkContext)
- object (class com.ptnj.streaming.alertJob.InstanceAlertJob$$anonfun$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 14 more
And I have never seen this situation before. Every example shows broadcast variables being created in an output operation function, not in a transformation function, so is that possible?
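It is possible, with a caveat: keep the broadcast out of the serialized closure by holding it in a lazily initialized singleton and creating it on the driver inside transform (whose outer closure runs on the driver for each batch). Below is a minimal sketch under assumed names: BroadcastWrapper and loadArgsFromDb are hypothetical, and the kafka_stream_handle signature is adapted from the question.

import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

// Hypothetical holder: lazily (re)creates the broadcast on the driver after a
// checkpoint restart, so the SparkContext is never captured by a task closure
object BroadcastWrapper {
  @volatile private var instance: Broadcast[Map[String, String]] = _

  def getInstance(sc: SparkContext, load: => Map[String, String]): Broadcast[Map[String, String]] = {
    if (instance == null) {
      synchronized {
        if (instance == null) instance = sc.broadcast(load)
      }
    }
    instance
  }
}

val parsed = stream.transform { rdd =>
  // Runs on the driver once per batch: safe to touch rdd.sparkContext here
  val args = BroadcastWrapper.getInstance(rdd.sparkContext, loadArgsFromDb())
  // Only the broadcast handle is captured by the executor-side closure
  rdd.map(x => DstreamHandle.kafka_stream_handle(args.value, x.value()))
}

This follows the lazily instantiated singleton pattern the Spark Streaming guide suggests for combining broadcast variables with checkpointing.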

Groovy/Spock BUG! when passing a variable into a closure

I have a Spock test of this form:
def "test"() {
when:
List result = repo.getSomeData(devices)
then:
result.size() == 1
interaction {
containsRequiredKeys(result[0].keySet())
}
where:
devices | _
id1 | "devices provided"
null | "no devices provided"
}
When I run this, compilation fails with the following error:
Error:Groovyc: BUG! exception in phase 'class generation' in source unit 'D:\w\traffic-data-web-service\src\integrationTest\groovy\com\company\project\db\Test.groovy' tried to get a variable with the name result as stack variable, but a variable with this name was not created
at org.codehaus.groovy.classgen.asm.CompileStack.getVariable(CompileStack.java:288)
at org.codehaus.groovy.classgen.asm.ClosureWriter.loadReference(ClosureWriter.java:131)
at org.codehaus.groovy.classgen.asm.ClosureWriter.writeClosure(ClosureWriter.java:106)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitClosureExpression(AsmClassGenerator.java:657)
at org.codehaus.groovy.ast.expr.ClosureExpression.visit(ClosureExpression.java:45)
at org.codehaus.groovy.classgen.asm.CallSiteWriter.makeCallSite(CallSiteWriter.java:303)
at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCachedCall(InvocationWriter.java:307)
at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:392)
at org.codehaus.groovy.classgen.asm.InvocationWriter.makeCall(InvocationWriter.java:104)
at org.codehaus.groovy.classgen.asm.InvocationWriter.makeInvokeMethodCall(InvocationWriter.java:88)
at org.codehaus.groovy.classgen.asm.InvocationWriter.writeInvokeMethod(InvocationWriter.java:459)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitMethodCallExpression(AsmClassGenerator.java:767)
at org.codehaus.groovy.ast.expr.MethodCallExpression.visit(MethodCallExpression.java:66)
at org.codehaus.groovy.classgen.asm.StatementWriter.writeExpressionStatement(StatementWriter.java:607)
at org.codehaus.groovy.classgen.asm.OptimizingStatementWriter.writeExpressionStatement(OptimizingStatementWriter.java:357)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitExpressionStatement(AsmClassGenerator.java:620)
at org.codehaus.groovy.ast.stmt.ExpressionStatement.visit(ExpressionStatement.java:42)
at org.codehaus.groovy.classgen.asm.StatementWriter.writeBlockStatement(StatementWriter.java:84)
at org.codehaus.groovy.classgen.asm.OptimizingStatementWriter.writeBlockStatement(OptimizingStatementWriter.java:158)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitBlockStatement(AsmClassGenerator.java:566)
at org.codehaus.groovy.ast.stmt.BlockStatement.visit(BlockStatement.java:71)
at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitClassCodeContainer(ClassCodeVisitorSupport.java:104)
at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitConstructorOrMethod(ClassCodeVisitorSupport.java:115)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitStdMethod(AsmClassGenerator.java:430)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitConstructorOrMethod(AsmClassGenerator.java:387)
at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitMethod(ClassCodeVisitorSupport.java:126)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitMethod(AsmClassGenerator.java:507)
at org.codehaus.groovy.ast.ClassNode.visitContents(ClassNode.java:1086)
at org.codehaus.groovy.ast.ClassCodeVisitorSupport.visitClass(ClassCodeVisitorSupport.java:53)
at org.codehaus.groovy.classgen.AsmClassGenerator.visitClass(AsmClassGenerator.java:233)
at org.codehaus.groovy.control.CompilationUnit$16.call(CompilationUnit.java:813)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1055)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at org.jetbrains.groovy.compiler.rt.GroovyCompilerWrapper.compile(GroovyCompilerWrapper.java:62)
at org.jetbrains.groovy.compiler.rt.DependentGroovycRunner.runGroovyc(DependentGroovycRunner.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.jetbrains.groovy.compiler.rt.GroovycRunner.intMain2(GroovycRunner.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.jetbrains.jps.incremental.groovy.InProcessGroovyc.runGroovycInThisProcess(InProcessGroovyc.java:156)
at org.jetbrains.jps.incremental.groovy.InProcessGroovyc.access$000(InProcessGroovyc.java:51)
at org.jetbrains.jps.incremental.groovy.InProcessGroovyc$1.call(InProcessGroovyc.java:85)
at org.jetbrains.jps.incremental.groovy.InProcessGroovyc$1.call(InProcessGroovyc.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Am I doing something wrong? Or is it an actual bug with Spock or Groovy?
If it helps, I've got the following in my build.gradle:
compile "org.codehaus.groovy:groovy-all:2.4.4"
testCompile "org.spockframework:spock-core:1.0-groovy-2.4"
testCompile "org.spockframework:spock-spring:1.0-groovy-2.4"
Your interaction uses the variable result. Spock can't match the interaction during the test, because result is only initialized after the invocation of repo.getSomeData.
Under the hood, interactions (mocks) are moved (initialized) before the when: block, where result is still undefined.
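Since nothing is actually mocked here, one workaround (a minimal sketch, assuming containsRequiredKeys performs its own assertions) is to drop the interaction block and call the helper as a plain then: condition, which is evaluated after result has been assigned:

def "test"() {
    when:
    List result = repo.getSomeData(devices)

    then:
    result.size() == 1
    // A plain condition is not hoisted before the when: block,
    // so result is initialized by the time this runs
    containsRequiredKeys(result[0].keySet())

    where:
    devices | _
    id1     | "devices provided"
    null    | "no devices provided"
}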

SparkSQL (Spark 1.3) UDF for operations on Date

I have a data frame with two string columns containing date information (e.g. "2014-01-01"). I would like to perform operations on these columns, such as casting them to a date format and subtracting dates. I tried defining a UDF based on examples I found on the internet, such as the following:
import java.util.Date
import org.apache.spark.sql.types.DateType
import org.apache.spark.sql.functions._
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat

val d = DateTimeFormat.forPattern("yyyy-mm-dd'T'kk:mm:ssZ")
val dtFunc: (String => Date) = (arg1: String) => DateTime.parse(arg1, d).toDate
val dtFunc2 = udf(dtFunc)
val x = df.withColumn("dt", dtFunc2(col("dt_string")))
But when I use it, I get the following error:
scala.MatchError: java.util.Date (of class scala.reflect.internal.Types$TypeRef$$anon$6)
at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:112)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:107)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
at org.apache.spark.sql.functions$.udf(functions.scala:402)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC$$iwC.<init>(<console>:53)
at $iwC$$iwC.<init>(<console>:55)
at $iwC.<init>(<console>:57)
at <init>(<console>:59)
at .<init>(<console>:63)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Can you help me with this, please? Thanks!
SparkSQL represents timestamps and dates using java.sql.Timestamp and java.sql.Date respectively; java.util.Date won't work here. You can simply extract the milliseconds and pass them to the java.sql.Date constructor.
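For example, a minimal sketch of that fix, reusing the Joda parsing from the question (note that the month pattern is MM, not mm):

import java.sql.Date
import org.apache.spark.sql.functions.{col, udf}
import org.joda.time.format.DateTimeFormat

val fmt = DateTimeFormat.forPattern("yyyy-MM-dd'T'kk:mm:ssZ")
// Return java.sql.Date, which Catalyst can map to DateType
val dtFunc = udf((s: String) => new Date(fmt.parseDateTime(s).getMillis))
val x = df.withColumn("dt", dtFunc(col("dt_string")))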
In practice I would consider using HiveContext and Hive UDFs. For example, you can use unix_timestamp with a specified pattern (using SimpleDateFormat syntax) to convert the string to seconds
df.selectExpr("*",
"""unix_timestamp(dt_string, "yyyy-MM-dd'T'kk:mm:ss")""")timestamp)""")
and use standard type casting to get date or timestamp.

How to convert clob to string using Groovy?

I want to select data from a table with a CLOB column, and the result looks like oracle.sql.CLOB@12fe54d.
I want to display the data as the string that appears in the table.
In Groovy I tried this:
rowTest = sql.firstRow("select name from table where id=10")
clobTest = (oracle.sql.CLOB)rowTest[0]
byte_stream_test = clobTest.getBinaryStream()
if( byte_stream_test == null ) { println "Test: Received null stream!" }
byte[] byte_array_test = new byte[10]
int bytes_read_test = byte_stream_test.read(byte_array_test)
print "Read $bytes_read_test bytes from the clob!"
sql.connection.close()
I have the following error:
---- A working test of writing and then reading a blob into an Oracle DB ---
Exception in thread "main" groovy.lang.MissingMethodException: No signature of method: oracle.sql.CLOB.getBinaryStream() is applicable for argument types: () values: []
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:56)
at org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:46)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at Test2.run(Test2.groovy:10)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1207)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
at org.codehaus.groovy.runtime.InvokerHelper.invokePogoMethod(InvokerHelper.java:901)
at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:884)
at org.codehaus.groovy.runtime.InvokerHelper.runScript(InvokerHelper.java:406)
at org.codehaus.groovy.runtime.InvokerHelper$runScript.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at Test2.main(Test2.groovy)
Can you help me resolve this problem? Thank you.
Try the following code:
rowTest = sql.firstRow("select name from table where id=10")
clobTest = (oracle.sql.CLOB)rowTest[0]
bodyText = clobTest?.asciiStream.text
println bodyText
Use characterStream instead of asciiStream to support Unicode characters:
import java.sql.Clob
import javax.sql.rowset.serial.SerialClob

String unicode = "äö"
// SerialClob is a concrete Clob implementation from the JDK
Clob clob = new SerialClob(unicode.toCharArray())
String strClob = clob?.characterStream?.text
assert strClob == unicode

'AST not available' exception while filtering DataSet

I have the following Groovy program:
def people = sql.dataSet('players')
def d = people.findAll { true }
print d.rows()
I get an exception at the print statement.
groovy.lang.GroovyRuntimeException: DataSet unable to evaluate expression. AST not available for closure: in.kshitiz.pgnimport.dao.PlayerDao$_get_closure1. Is the source code on the classpath?
at groovy.sql.DataSet.visit(DataSet.java:300)
at groovy.sql.DataSet.getSqlWhereVisitor(DataSet.java:283)
at groovy.sql.DataSet.getSqlWhere(DataSet.java:230)
at groovy.sql.DataSet.getSql(DataSet.java:257)
at groovy.sql.DataSet.rows(DataSet.java:332)
at groovy.sql.DataSet$rows.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at in.kshitiz.pgnimport.dao.PlayerDao.get(PlayerDao.groovy:13)
at in.kshitiz.pgnimport.dao.PlayerDao$get.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at in.kshitiz.pgnimport.dao.PlayerDaoTest.testGet(PlayerDaoTest.groovy:17)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
The exception is not very helpful. What does it mean?
Simply add your Groovy DAO source to the classpath to fix the problem.
In this mailing list, Guillaume Laforge says:
There's a twist in the usage of the Sql class.
It needs your Groovy class source code available -- like when running groovy myScript.groovy.
So for instance, when a class is pre-compiled, the "class node" won't be available anymore, as it's not stored in the bytecode.
That may be the issue you're facing here.
The class node is needed for the findall {} part, where we're visiting the AST for creating an adhoc query.
From Groovy in Action (Chapter 10, page 343, para 4):
The DataSet implementation fetches Groovy's internal representation of the closure's code. This internal representation is called the Abstract Syntax Tree (AST) and was generated by the Groovy parser. By walking over the AST (with a Visitor pattern), the DataSet implementation emits the SQL equivalent of each AST node.
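To make that concrete, here is a small sketch (the connection details and the rating column are illustrative) of a closure being turned into a WHERE clause when the script's source is on the classpath:

import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:h2:mem:test', 'sa', '', 'org.h2.Driver') // placeholder connection
def players = sql.dataSet('players')

// Groovy walks this closure's AST and emits:
// select * from players where rating > 2700
def top = players.findAll { it.rating > 2700 }
println top.rows()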
