Unpickler for class with tuple - scala-pickling

I recently came across this framework and it seems really promising for what I need. I am testing some simple examples out and I'm curious why I can pickle my object but it can't find an unpickler. Here is my example:
import scala.pickling._
import json._

object JsonTest extends App {
  val simplePickled = new Simple(("test", 3)).pickle
  val unpickled = simplePickled.unpickle[Simple]
}

class Simple(val x: (String, Int)) {}
Cannot generate an unpickler for com.ft.Simple
Thanks in advance for any help.

This behavior is actually a regression introduced three days ago. We just resolved it and pushed a fix less than two hours ago.
The code you posted above now works again:
scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.pickling._
import json._

object JsonTest extends App {
  val simplePickled = new Simple(("test", 3)).pickle
  val unpickled = simplePickled.unpickle[Simple]
}

class Simple(val x: (String, Int)) {}
// Exiting paste mode, now interpreting.
import scala.pickling._
import json._
defined module JsonTest
defined class Simple
I've also added your code snippet as a test case in our test suite.
If you're using the artifacts we publish on Sonatype, you'll have to wait until the next artifact is published (tomorrow). If you want the fix right away, you can check out scala/pickling, package it with sbt, and use the jar that sbt builds (sbt prints where it put the jar).

Related

How does an imported class work in a Groovy Closure when calling it?

I have a Groovy Closure which uses an imported class, like:
import com.XXX
Closure test = { a -> XXX(a) }
test('some str')
How does the imported class XXX work inside the closure test, since I never defined XXX in test?
In this case delegate and owner both point to the current script, and I still can't figure out how the import works.
Thanks
This example works; maybe look at how you specify the package structure in your import statement:
assert org.apache.commons.lang3.text.WordUtils.capitalizeFully('groovy closure') == 'Groovy Closure'
import org.apache.commons.lang3.text.WordUtils
Closure test = { a -> WordUtils.capitalizeFully(a) }
assert test('groovy closure') == 'Groovy Closure'
I finally figured out that this is really a Java question.
The import keyword in Java is a kind of syntactic sugar that lets you refer to a class without its fully qualified name. When the file is compiled, the compiler replaces the short class name with the fully qualified name from the import.
So in my case, XXX is compiled to com.XXX inside the closure (it does not matter whether it is a Java or Groovy class), and it works in whatever class the closure is called from.

'spaceship' class is not defined (even though I've imported it?)

I run the main module, which should work correctly, but an error is returned: 'spaceship' is not defined, at the line where I define s = spaceship(parameters). Why is this? I don't get it. I'm using Zelle graphics for Python. Thank you.
Functions from main module:
spaceshipGame file
from graphics import *
from spaceshipClass import *

def main():
    window = createGraphicsWindow()
    runGame(window)

def createGraphicsWindow():
    win = GraphWin("Spaceship game", 800, 800)
    return win

def createSpaceship(window, p1, p2, p3, speed, colour):
    s = spaceship(window, p1, p2, p3, speed, colour)
    return s

def runGame(window):
    player = createSpaceship(window, Point(500, 500), Point(500, 470), Point(520, 485), 0.5, "red")
    player.draw(window)

main()
spaceshipClass file
from spaceshipGame import *
from graphics import *

class spaceship:
    def __init__(self, window, p1, p2, p3, speed, colour):
        self.p1 = p1
        self.p2 = p2
        self.p3 = p3
        self.speed = speed
        self.colour = colour
        self.window = window
Never mind, I see the problem. Consult this example for more information:
Simple cross import in python
But the problem is the way you are cross importing, so delete from spaceshipGame import * from the spaceshipClass file, or vice versa (i.e. delete from spaceshipClass import * from the spaceshipGame file). You can import names individually if you need to, as in the example I provided.
There are also many other ways around it if you read the example. One of the easiest would be just merging the two files into one if they need to share a lot of methods.
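Here is a minimal sketch of that fix, using the file names described above: spaceshipClass no longer imports spaceshipGame (it never actually used anything from it), so the imports flow one way and the cycle is broken.
# spaceshipClass.py - the import of spaceshipGame is removed, breaking the cycle
from graphics import *

class spaceship:
    def __init__(self, window, p1, p2, p3, speed, colour):
        self.p1 = p1
        self.p2 = p2
        self.p3 = p3
        self.speed = speed
        self.colour = colour
        self.window = window

# spaceshipGame.py - imports now flow one way only
from graphics import *
from spaceshipClass import spaceship  # import just the class by name

def createSpaceship(window, p1, p2, p3, speed, colour):
    return spaceship(window, p1, p2, p3, speed, colour)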

SQLAlchemy with multiple relationships and files

Hello everyone,
We are currently working with SQLAlchemy and Flask on a web service. As part of this work we have a class A in the file a.py:
from common import db
from sqlalchemy.orm import relationship

class A(db.Model):
    ...
    b = relationship('B')
    c = relationship('C')
    ...
B and C are located in two distinct files (b.py and c.py).
The current endpoint we are creating needs only the A and B classes, but we are forced to import C as well:
from common.a import A
from common.b import B
from common.c import C

class NewResource(Resource):
    def get(self):
        # do something with A and A.b
If I remove the import of C I get:
sqlalchemy.exc.InvalidRequestError: One or more mappers failed to
initialize - can't proceed with initialization of other mappers.
Triggering mapper: 'Mapper|A|c'. Original exception was: When
initializing mapper Mapper|A|c, expression 'C' failed to locate a name
("name 'C' is not defined"). If this is a class name, consider adding
this relationship() to the class after both
dependent classes have been defined.
We are also reconsidering the whole idea of separating the classes into different files at all.
Thanks for the help.
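What the error message is saying is that relationship('C') is resolved by string name only when SQLAlchemy configures the mappers, so the module defining C must already have been imported by that point, even if the endpoint itself never touches C. One common workaround is to gather the model imports in a single module so each endpoint needs only one import line. Here is a minimal sketch, where common/models.py is an assumed (hypothetical) module name:
# common/models.py - hypothetical aggregator module: importing it once
# guarantees that A, B and C are all defined before mapper configuration runs
from common.a import A
from common.b import B
from common.c import C

# In the endpoint file a single import is then enough; C is loaded as a side
# effect, so the string 'C' in relationship('C') can be resolved.
from common.models import A, B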

Update Custom Field Value using a "Scriptrunner for Jira" Custom Listener

Hello, we are using Jira and are currently evaluating the plugin "Scriptrunner for Jira" by Adaptavist.
I'd like to create a custom Listener which simply updates the value of a custom field. The field's type is a default textbox, nothing fancy there.
Going by the plugin's documentation and various web searches, I came up with the following code:
import com.atlassian.jira.issue.CustomFieldManager
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.MutableIssue
def issue = event.issue as Issue
MutableIssue issueToUpdate = (MutableIssue) issue;
CustomFieldManager customFieldManager = ComponentAccessor.getCustomFieldManager();
def cf = customFieldManager.getCustomFieldObjects(issue).find {it.name == 'My CustomField'}
issueToUpdate.setCustomFieldValue(cf, "myvalue");
The validator does not complain about anything here, and the script seems to execute without any errors. The problem is that the custom field's value is simply not updated. Maybe some of you have the missing piece.
Every line seems to be needed, as the validator complains otherwise. Thank you in advance for your help.
I just got an answer from Adaptavist that finally works: setCustomFieldValue only changes the value on the in-memory issue object, whereas CustomField.updateValue writes the change through to the database. Please find the working code below:
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder
import com.atlassian.jira.component.ComponentAccessor
def issue = event.issue as Issue
def customFieldManager = ComponentAccessor.getCustomFieldManager()
def tgtField = customFieldManager.getCustomFieldObjects(event.issue).find {it.name == "My CustomField"}
def changeHolder = new DefaultIssueChangeHolder()
tgtField.updateValue(null, issue, new ModifiedValue(issue.getCustomFieldValue(tgtField), "myvalue"),changeHolder)

Error: Annotator "sentiment" requires annotator "binarized_trees"

Could anyone tell me when this error can happen? Any idea is really appreciated. Do I need to add anything, such as another annotator? Is this an issue with the data, or with the model I am passing in place of the default model?
I am using Stanford NLP 3.4.1 to do sentiment calculation on social media data. When I run it through a Spark/Scala job I get the following error for some of the data.
java.lang.IllegalArgumentException: annotator "sentiment" requires annotator "binarized_trees"
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:300)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:129)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:125)
at com.pipeline.sentiment.NonTwitterSentimentAndThemeProcessorAction$.create(NonTwitterTextEnrichmentComponent.scala:142)
at com.pipeline.sentiment.NonTwitterTextEnrichmentInitialized.action$lzycompute(NonTwitterTextEnrichmentComponent.scala:52)
at com.pipeline.sentiment.NonTwitterTextEnrichmentInitialized.action(NonTwitterTextEnrichmentComponent.scala:50)
at com.pipeline.sentiment.NonTwitterTextEnrichmentInitialized.action(NonTwitterTextEnrichmentComponent.scala:49)
Here is the code I have in Scala:
def create(features: Seq[String] = Seq("tokenize", "ssplit", "pos", "parse", "sentiment")): TwitterSentimentAndThemeAction = {
  println("comes inside the TwitterSentimentAndThemeProcessorAction create method")
  val props = new Properties()
  props.put("annotators", features.mkString(", "))
  props.put("pos.model", "tagger/gate-EN-twitter.model")
  props.put("parse.model", "tagger/englishSR.ser.gz")
  val pipeline = new StanfordCoreNLP(props)
Any help is really appreciated. Thanks.
Are you sure this is the error you get? With your code, I get the error:
Loading parser from serialized file tagger/englishSR.ser.gz ...edu.stanford.nlp.io.RuntimeIOException: java.io.IOException: Unable to resolve "tagger/englishSR.ser.gz" as either class path, filename or URL
This makes much more sense. The shift-reduce parser model lives at edu/stanford/nlp/models/srparser/englishSR.ser.gz. If I don't use the shift-reduce model, the code as written works fine for me; likewise, if I include the model path above, it works OK.
The exact code I tried is:
#!/bin/bash
exec scala -J-mx4g "$0" "$@"
!#
import scala.collection.JavaConversions._
import edu.stanford.nlp.pipeline._
import java.util._

val props = new Properties()
props.put("annotators", Seq("tokenize", "ssplit", "pos", "parse", "sentiment").mkString(", "))
props.put("parse.model", "edu/stanford/nlp/models/srparser/englishSR.ser.gz")
val pipeline = new StanfordCoreNLP(props)
