Issue
I have a scenario where I need to convert a Scala Map to a case class object, and with the help of the following references I was able to achieve it locally (Scala version 2.12.13):
Scala: convert map to case class
Convert a Map into Scala object
But when I tried running the same block of code in a Databricks notebook, it threw an error:
IllegalArgumentException: Cannot construct instance of '$line23851bc084ae4df7a16bf9c475868d9265.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$Test' (although at least one Creator exists): can only instantiate non-static inner class by using default, no-argument constructor at [Source: UNKNOWN; line: -1, column: -1]
Cluster configuration: Databricks Runtime 8.2 (includes Spark 3.1.1, Scala 2.12). Please refer to the screenshot for the complete code.
Workaround (not advisable):
def workaround(map: Map[String, Any]): Test = {
  // Manually extract and cast each field from the map.
  Test(
    map("k1").asInstanceOf[Int],
    map("k2").asInstanceOf[String],
    map("k3").asInstanceOf[String]
  )
}
val result = workaround(myMap)
Any thoughts on how to resolve this issue?
This smells to me like one of two possibilities.
First, we should double-check that there is no version mismatch between something in your local runtime environment and the Databricks runtime environment. You can check this page for a list of all library versions included in DBR 8.2. In particular, I would check your local environment to make sure you are running the same version of Jackson (2.10.0).
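As a quick check, you can print the jackson-databind version actually loaded on the classpath in both environments. A minimal sketch:

import com.fasterxml.jackson.databind.ObjectMapper

// Prints the version of jackson-databind that is actually on the
// classpath; run this both locally and in the Databricks notebook.
println(new ObjectMapper().version())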
Second, this could be an interaction between how Databricks implements their notebook and a limitation of Jackson. Each command of a Databricks notebook is wrapped in a package object that is given a random name. For example, I can tell from the exception that the package object which holds your Test class definition was called $line23851bc084ae4df7a16bf9c475868d9265.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw. (I say that it was named that because each time you re-run the command it will generate a new package object.) This implies that all classes (and other kinds of type definitions) are actually path-dependent types when put inside a notebook. More specifically for this scenario, your Test class is an inner class of the package object. Based on the error message and some brief reading of the docs, I suspect that Jackson cannot instantiate path-dependent types.
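If that is the cause, one way to test it is to move the Test definition out of the notebook, e.g. into a compiled JAR attached to the cluster, so it is a top-level class rather than a path-dependent one. A minimal sketch, assuming Test has the fields from your workaround and that jackson-module-scala is available on the classpath:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Defined at the top level of a compiled source file, not in a notebook
// cell, so it is not an inner class of a generated wrapper object.
case class Test(k1: Int, k2: String, k3: String)

object MapConversion {
  // ObjectMapper with Scala support so Jackson can handle case classes.
  private val mapper = new ObjectMapper().registerModule(DefaultScalaModule)

  def toTest(map: Map[String, Any]): Test =
    mapper.convertValue(map, classOf[Test])
}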
Related
I have used this tool from jOOQ's official website to generate code from my database:
https://github.com/etiennestuder/gradle-jooq-plugin
Yet if I set
directory = 'src/main/java'
when I run "gradle build", I get compile errors like:
database/information_schema/InformationSchema.java:218: error: no suitable constructor found for SchemaImpl(String,<null>)
super("INFORMATION_SCHEMA", null);
^
constructor SchemaImpl.SchemaImpl(Name) is not applicable
(actual and formal argument lists differ in length)
constructor SchemaImpl.SchemaImpl(String) is not applicable
(actual and formal argument lists differ in length)
Any fix for this?
Note that I wanted to put the generated code into the src folder because I want to use it in my code. I've heard that it should go in the target or build folder instead, but I'm not sure how you would access those classes from the target or build folder.
Thanks!
I was on 3.7. I switched to 3.9, and everything turned out to be fine...
I've written a short blog post about this. Starting with jOOQ 3.16 and #12601, there's an additional compilation error when users combine:
An older version of org.jooq:jooq (the runtime library)
A newer version of org.jooq:jooq-codegen (the code generation library)
In general, the runtime library version must be greater than or equal to the codegen library version. The new compilation error might look like this:
[ERROR] …/DefaultCatalog.java:[53,73] cannot find symbol
[ERROR] symbol: variable VERSION_3_17
[ERROR] location: class org.jooq.Constants
I am trying to use Spark LuceneRDD with the record linkage concept from the link.
I did all the steps mentioned in the link, but I am getting the error
Error: No implicit view available for String => org.apache.lucene.document.Document
I tried adding the Lucene jar to the Spark shell, but I am still getting the same error.
Any help is appreciated.
Adding Lucene jars will not help you here. The problem is that this functionality relies on Scala implicits: there needs to be an implicit mapping function in scope that transforms a String into a Lucene Document.
Looking over GitHub, I found the implicit conversion that does exactly that: https://github.com/zouzias/spark-lucenerdd/blob/master/src/main/scala/org/zouzias/spark/lucenerdd/package.scala
So you just need to add an import to your code, something like this:
import org.zouzias.spark.lucenerdd._
or, more precisely, if you need only this one conversion (which may not cover your case):
import org.zouzias.spark.lucenerdd.stringToDocument
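For context, here is a minimal sketch of where the implicit view comes into play (assuming the LuceneRDD(...) constructor shown in that repository's README):

import org.apache.spark.SparkContext
// Brings the implicit String => Document view (and LuceneRDD) into scope.
import org.zouzias.spark.lucenerdd._

def buildIndex(sc: SparkContext): LuceneRDD[String] = {
  val words = sc.parallelize(Seq("hello", "world"))
  // Compiles only because the imported implicit view converts each
  // String into an org.apache.lucene.document.Document.
  LuceneRDD(words)
}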
I have recently upgraded to the latest version of Stanford CoreNLP. The code I previously used to get the subject or object in a sentence was
System.out.println("subject: "+dependencies.getChildWithReln(dependencies.getFirstRoot(), EnglishGrammaticalRelations.NOMINAL_SUBJECT));
but this now returns null.
I have tried creating a relation with
GrammaticalRelation subjreln =
edu.stanford.nlp.trees.GrammaticalRelation.valueOf("nsubj");
without success. If I extract a relation using code like
GrammaticalRelation target = (dependencies.childRelns(dependencies.getFirstRoot())).iterator().next();
and then run the same request,
System.out.println("target: "+dependencies.getChildWithReln(dependencies.getFirstRoot(), target));
then I get the desired result, confirming that the parsing worked fine (I also know this from printing out the full dependencies).
I suspect my problem has to do with the switch to universal dependencies, but I don't know how to create the GrammaticalRelation from scratch in a way that will match what the dependency parser found.
Since version 3.5.2, the default dependency representation in CoreNLP has been Universal Dependencies. This new representation is implemented in a different class (UniversalEnglishGrammaticalRelations), so the GrammaticalRelation objects are now defined somewhere else.
All you have to do to use the new version is replace EnglishGrammaticalRelations with UniversalEnglishGrammaticalRelations:
System.out.println("subject: "+dependencies.getChildWithReln(dependencies.getFirstRoot(), UniversalEnglishGrammaticalRelations.NOMINAL_SUBJECT));
Note, however, that some relations in the new representation are different and might no longer exist (nsubj still does). We are currently compiling migration guidelines from the old representation to the new Universal Dependencies relations. The document is still incomplete, but it already contains all relation names and their class names in CoreNLP.
If I have a class named Character.groovy (with no explicit constructors) and try to instantiate it, I get a message that says:
groovy.lang.GroovyRuntimeException: Could not find matching constructor for: java.lang.Character()
But if I change the class name to Characterr.groovy, then I am able to instantiate an object and use it as expected. So are there reserved words I can't use as Groovy class names? If so, why is Character one of them?
It's not a reserved class name, but there is already a class with that name (java.lang.Character), imported because the package java.lang gets imported automatically in Java.
This can happen all the time, especially if you are a Java developer and not used to having packages like java.io auto-imported for you by Groovy (e.g. File) (see also What packages does 1) Java and 2) Groovy automatically import?)
There are three ways around it:
the Java way: address your class by its full name, that is, package plus class name, e.g. org.myrpg.Character.
the Groovy way: import the class under a new name, e.g. import org.myrpg.Character as RPGChar, and then use RPGChar instead.
the zen way: more often than not, it is not worth the hassle, and it is easier to just rename your class. If you have tripped over this once, the chance is very high you will trip over it again, and only things like @CompileStatic or an IDE may make you notice it at compile time or while writing the code.
http://groovy.codehaus.org/Reserved+Words
Those are the reserved keywords
http://docs.oracle.com/javase/8/docs/api/java/lang/Character.html
Character, I believe, is an object wrapper class in Java, which is why you can't use it. You can't use the name of any Java class that is auto-imported in Java.
I am trying to use a different location for external jars in SoapUI. I updated the SoapUI batch file by adding the line below.
set JAVA_OPTS=%JAVA_OPTS% -Dsoapui.ext.libraries="C:\Program Files\Groovy\Groovy-2.1.6\lib"
Now when I open SoapUI and try to create an ActiveX object using Scriptom (see below),
import org.codehaus.groovy.scriptom.*
def tdc = new ActiveXObject('TDApiOle80.TDConnection')
I get the following error, which seems weird because I know I am using Groovy 2.1.6, as you can see from the path.
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed: Could not instantiate global transform class org.spockframework.compiler.SpockTransform specified at jar:file:/C:/Program%20Files/Groovy/Groovy-2.1.6/lib/spock-core-0.7-groovy-2.0-20120930.020057-22.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation because of exception org.spockframework.util.IncompatibleGroovyVersionException: The Spock compiler plugin cannot execute because Spock 0.7.0-groovy-2.0 is not compatible with Groovy 1.8.0. For more information, see http://versioninfo.spockframework.org Spock location: file:/C:/Program%20Files/Groovy/Groovy-2.1.6/lib/spock-core-0.7-groovy-2.0-20120930.020057-22.jar Groovy location: file:/C:/Program%20Files/SmartBear/soapUI-Pro-4.5.2/lib/groovy-all-1.8.0.jar 1 error
Does anyone know why I am getting this error and what I can do to fix it?
I believe SoapUI (at least 4.5.1) is bundled with Groovy 1.8.0.
At least it was back in May of this year (2013).
You could try the suggestion posted on the page to upgrade, or I guess you're stuck with 1.8.0 functionality (and the non-2.0 Spock dependency).