I have an enumeration that I'm trying to pickle and unpickle with scala-pickling 0.8.0 and Scala 2.11:
object CommandType extends Enumeration {
  val Push, Pop = Value
}
Pickling cannot handle it automatically at the moment, so the custom pickler/unpickler looks like this:
class CommandTypePickler(implicit val format: PickleFormat)
    extends SPickler[CommandType.Value] with Unpickler[CommandType.Value] with LazyLogging {

  def pickle(picklee: CommandType.Value, builder: PBuilder): Unit = {
    builder.beginEntry(picklee)
    builder.putField("commandType", b =>
      b.hintTag(stringTag).beginEntry(picklee.toString).endEntry()
    )
    builder.endEntry()
  }

  override def unpickle(tag: => FastTypeTag[_], reader: PReader): CommandType.Value = {
    val ctReader = reader.readField("commandType")
    val fieldTag = ctReader.beginEntry() // renamed so it does not shadow the `tag` parameter
    logger.debug(s"tag is ${fieldTag.toString}")
    val value = stringUnpickler.unpickle(fieldTag, ctReader).asInstanceOf[String]
    ctReader.endEntry()
    CommandType.withName(value)
  }
}
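For reference, this is roughly how the pickler gets registered and exercised (a hedged sketch: the implicit wiring and imports reflect my setup for scala-pickling 0.8.x and may differ in yours):

import scala.pickling._
import scala.pickling.json._

// Assumption: expose the custom pickler implicitly so it is resolved
// for CommandType.Value instead of a generated one.
implicit def commandTypePickler(implicit format: PickleFormat): CommandTypePickler =
  new CommandTypePickler

val pickled = (CommandType.Push: CommandType.Value).pickle
val restored = pickled.unpickle[CommandType.Value] // throws the exception below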
Serialized enumeration:
{
  "tpe": "scala.Enumeration.Value",
  "commandType": {
    "tpe": "java.lang.String",
    "value": "Push"
  }
}
When unpickling, this throws the following: ScalaReflectionException: : class scala.Enumeration.Value in JavaMirror with sun.misc.Launcher$AppClassLoader#5c3eeab3 of type class sun.misc.Launcher$AppClassLoader with classpath ... not found. What am I doing wrong?
I have a data provider that has an Observable<Int> as part of its public API. My class under test maps this into an Observable<String>.
How do I create a mock so that it can send out different values on the data provider's observable?
I can do it using a fake object, but that is a lot of work that I don't think is necessary with MockK.
Simplified code:
interface DataProvider {
    val numberData: Observable<Int>
}

class FakeDataProvider : DataProvider {
    private val _numberData = BehaviorSubject.createDefault(0)
    override val numberData = _numberData.hide()

    // Note: the internals of this class cause the _numberData changes.
    // I can use this method to fake the changes for this fake object,
    // but the real class doesn't have this method.
    fun fakeNewNumber(newNumber: Int) {
        _numberData.onNext(newNumber)
    }
}

interface ClassUnderTest {
    val stringData: Observable<String>
}

class MyClassUnderTest(dataProvider: DataProvider) : ClassUnderTest {
    override val stringData = dataProvider.numberData.map { "string = " + it.toString() }
}
class MockKTests {
    @Test fun testUsingFakeDataProvider() {
        val fakeDataProvider = FakeDataProvider()
        val classUnderTest = MyClassUnderTest(fakeDataProvider)
        val stringDataTestObserver = TestObserver<String>()
        classUnderTest.stringData.subscribe(stringDataTestObserver)
        fakeDataProvider.fakeNewNumber(1)
        fakeDataProvider.fakeNewNumber(2)
        fakeDataProvider.fakeNewNumber(3)
        // Note we are expecting the initial value of 0 to also come through
        stringDataTestObserver.assertValuesOnly("string = 0", "string = 1", "string = 2", "string = 3")
    }

    // How do you write the mock to trigger the dataProvider observable?
    @Test fun testUsingMockDataProvider() {
        val mockDataProvider = mockk<DataProvider>()
        // every { ... what goes here ... } just Runs
        val classUnderTest = MyClassUnderTest(mockDataProvider)
        val stringDataTestObserver = TestObserver<String>()
        classUnderTest.stringData.subscribe(stringDataTestObserver)
        // Note we are expecting the initial value of 0 to also come through
        stringDataTestObserver.assertValuesOnly("string = 0", "string = 1", "string = 2", "string = 3")
    }
}
Try using the following:
every { mockDataProvider.numberData } answers { Observable.range(1, 3) }
And you may need to create the mock object another way, for example as a spy. Note that DataProvider is an interface, so a spy needs a concrete instance to wrap:
val mockDataProvider = spyk(FakeDataProvider())
Do something like this, where we create an observable fake list:
val fakeList: List<Quiz> = listOf(
    Quiz("G1", "fromtest", "", "", 1)
)
val observableFakelist = Observable.fromArray(fakeList)
You can then return your observableFakelist.
I am trying to add a generic Slick Entity / Table helper class:
trait HasSlickProfile {
  val profile: JdbcProfile
}

trait SlickedDBConfig extends HasSlickProfile {
  lazy val profile: JdbcProfile = ???
}

abstract class EntityHelper[E] extends HasSlickProfile {
  import profile.api._

  type T <: Table[E]

  def table: TableQuery[_ <: Table[E]]

  // TONS OF USEFUL ENTITY / TABLE STUFF
}
but it does not compile when used in:
object ValueOfHelper extends EntityHelper[ValueOf] with SlickedDBConfig {
  import profile.api._

  type T = ValueOfTable
  val table = ValueOfTable
}
since two errors are found:
[error] /.../ValueOfHelper.scala:22: overriding type T in class EntityHelper with bounds <: fp.model.ValueOfHelper.profile.api.Table[fp.model.Tables.ValueOf];
[error] type T has incompatible type
[error] type T = ValueOfTable
[error] ^
[error] /.../ValueOfHelper.scala:23: overriding method table in class EntityHelper of type => fp.model.ValueOfHelper.profile.api.TableQuery[_ <: fp.model.ValueOfHelper.profile.api.Table[fp.model.Tables.ValueOf]];
[error] value table has incompatible type
[error] val table = ValueOfTable
[error] ^
The autogenerated Tables code looks like this:
import slicked.codegen.SlickedDBConfig

object Tables extends {
} with Tables with SlickedDBConfig

trait Tables {
  val profile: slick.jdbc.JdbcProfile
  import profile.api._

  // Custom imports start
  import slicked.SlickedRow
  import slicked.SlickedTable
  import slicked.SlickMappers._
  // Custom imports end

  import slick.model.ForeignKeyAction
  import slick.jdbc.{GetResult => GR}

  lazy val schema: profile.SchemaDescription = ValueOfTable.schema

  case class ValueOf(of: String, in: String, timestamp: org.joda.time.DateTime, `val`: Int, info: Option[String] = None) extends SlickedRow

  implicit def GetResultValueOf(implicit e0: GR[String], e1: GR[org.joda.time.DateTime], e2: GR[Int], e3: GR[Option[String]]): GR[ValueOf] = GR {
    prs => import prs._
      ValueOf.tupled((<<[String], <<[String], <<[org.joda.time.DateTime], <<[Int], <<?[String]))
  }

  class ValueOfTable(_tableTag: Tag) extends profile.api.Table[ValueOf](_tableTag, Some("forex"), "VALUE_OF") with SlickedTable {
    def * = (of, in, timestamp, `val`, info) <> (ValueOf.tupled, ValueOf.unapply)
    def ? = (Rep.Some(of), Rep.Some(in), Rep.Some(timestamp), Rep.Some(`val`), info).shaped.<>({ r => import r._; _1.map(_ => ValueOf.tupled((_1.get, _2.get, _3.get, _4.get, _5))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))

    val of: Rep[String] = column[String]("OF", O.Length(16, varying = true))
    val in: Rep[String] = column[String]("IN", O.Length(16, varying = true))
    val timestamp: Rep[org.joda.time.DateTime] = column[org.joda.time.DateTime]("TIMESTAMP")
    val `val`: Rep[Int] = column[Int]("VAL")
    val info: Rep[Option[String]] = column[Option[String]]("INFO", O.Default(None))

    val index1 = index("VALUEOF_TIMESTAMP_OF_IN_pk", (timestamp, of, in), unique = true)
  }

  lazy val ValueOfTable = new TableQuery(tag => new ValueOfTable(tag))
}
How do you write it to be profile agnostic?
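The errors come from path-dependent types: the bound on T refers to this.profile.api.Table, while the generated ValueOfTable is tied to Tables.profile, and the compiler cannot prove those two profile paths are the same instance. Below is a hedged sketch of one way to align them; the component trait and its wiring are my assumptions, not a verified answer:

trait ValueOfHelperComponent { self: Tables =>

  // Pin the helper's profile to the singleton type of the Tables instance's
  // profile, so T's bound and ValueOfTable refer to one and the same path.
  object ValueOfHelper extends EntityHelper[ValueOf] {
    val profile: self.profile.type = self.profile
    type T = ValueOfTable
    def table = ValueOfTable
  }
}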
I have a DAO helper trait that provides common functionality to DAOs. It needs to be able to access the table query, and run actions. I'm having trouble defining or otherwise providing the query type to the helper trait.
Below is some code, also available in a short demo project on GitHub, in the action branch.
First, db is defined in trait DBComponent:
trait DBComponent {
  import slick.driver.JdbcProfile

  val driver: JdbcProfile
  import driver.api._

  val db: Database
}
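For context, here is a hedged sketch of what a concrete component might look like; the H2 driver and connection URL are my assumptions, not taken from the demo project:

trait H2DBComponent extends DBComponent {
  import slick.driver.JdbcProfile

  // Bind the abstract driver and db members to a concrete profile and database.
  val driver: JdbcProfile = slick.driver.H2Driver
  import driver.api._

  val db: Database = Database.forURL("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")
}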
The classes to be persisted extend HasId:
trait HasId {
  def id: Option[Int] = None
}
Here is one such class to be persisted:
case class BankInfo(
  owner: String,
  branches: Int,
  bankId: Int,
  override val id: Option[Int] = None
) extends HasId
The problem is that I don't know how to set QueryType in the following DAO helper trait; I expect that most of the errors that follow are a consequence of the improper type that I used:
/** Handles all actions pertaining to HasId or that do not require parameters */
trait DbAction[T <: HasId] { this: DBComponent =>
  import driver.api._ // defines DBIOAction

  type QueryType <: slick.lifted.TableQuery[Table[T]] // this type is wrong! What should it be?

  def tableQuery: QueryType

  // Is this defined correctly?
  def run[R](action: DBIOAction[R, NoStream, Nothing]): Future[R] = db.run { action }

  def deleteById(id: Option[Long]): Unit =
    for { i <- id } run { tableQuery.filter(_.id === id).delete } // id is unknown because QueryType is wrong

  def findAll: Future[List[T]] = run { tableQuery.to[List].result } // also b0rked

  // Remaining methods shown on GitHub
}
FYI, here is how the above will be used. First, the trait that defines the table query:
trait BankInfoTable extends BankTable { this: DBComponent =>
  import driver.api._

  class BankInfoTable(tag: Tag) extends Table[BankInfo](tag, "bankinfo") {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val owner = column[String]("owner")
    val bankId = column[Int]("bank_id")
    val branches = column[Int]("branches")

    def bankFK = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
  }

  val tableQuery = TableQuery[BankInfoTable]

  def autoInc = tableQuery returning tableQuery.map(_.id)
}
It all comes together here:
trait BankInfoRepositoryLike extends BankInfoTable with DbAction[BankInfo] { this: DBComponent =>
  import driver.api._

  @inline def updateAsync(bankInfo: BankInfo): Future[Int] =
    run { tableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  @inline def getByIdAsync(id: Int): Future[Option[BankInfo]] =
    run { tableQuery.filter(_.id === id).result.headOption }
}
Suggestions?
Full working example:
package com.knol.db.repo

import com.knol.db.connection.DBComponent
import com.knol.db.connection.MySqlDBComponent
import scala.concurrent.{Await, Future}
import concurrent.duration.Duration

trait LiftedHasId {
  def id: slick.lifted.Rep[Int]
}

trait HasId {
  def id: Option[Int]
}

trait GenericAction[T <: HasId] { this: DBComponent =>
  import driver.api._

  type QueryType <: slick.lifted.TableQuery[_ <: Table[T] with LiftedHasId]

  val tableQuery: QueryType

  @inline def deleteAsync(id: Int): Future[Int] = db.run { tableQuery.filter(_.id === id).delete }
  @inline def delete(id: Int): Int = Await.result(deleteAsync(id), Duration.Inf)

  @inline def deleteAllAsync(): Future[Int] = db.run { tableQuery.delete }
  @inline def deleteAll(): Int = Await.result(deleteAllAsync(), Duration.Inf)

  @inline def getAllAsync: Future[List[T]] = db.run { tableQuery.to[List].result }
  @inline def getAll: List[T] = Await.result(getAllAsync, Duration.Inf)

  @inline def getByIdAsync(id: Int): Future[Option[T]] =
    db.run { tableQuery.filter(_.id === id).result.headOption }
  @inline def getById(id: Int): Option[T] = Await.result(getByIdAsync(id), Duration.Inf)

  @inline def deleteById(id: Option[Int]): Unit =
    db.run { tableQuery.filter(_.id === id).delete }

  @inline def findAll: Future[List[T]] = db.run { tableQuery.to[List].result }
}
trait BankInfoRepository extends BankInfoTable with GenericAction[BankInfo] { this: DBComponent =>
  import driver.api._

  type QueryType = TableQuery[BankInfoTable]
  val tableQuery = bankInfoTableQuery

  def create(bankInfo: BankInfo): Future[Int] = db.run { bankTableInfoAutoInc += bankInfo }

  def update(bankInfo: BankInfo): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  /**
   * Get bank and info using the foreign key relationship.
   */
  def getBankWithInfo(): Future[List[(Bank, BankInfo)]] =
    db.run {
      (for {
        info <- bankInfoTableQuery
        bank <- info.bank
      } yield (bank, info)).to[List].result
    }

  /**
   * Get all banks and their info. It is possible that some banks do not have info.
   */
  def getAllBankWithInfo(): Future[List[(Bank, Option[BankInfo])]] =
    db.run {
      bankTableQuery.joinLeft(bankInfoTableQuery).on(_.id === _.bankId).to[List].result
    }
}
private[repo] trait BankInfoTable extends BankTable { this: DBComponent =>
  import driver.api._

  class BankInfoTable(tag: Tag) extends Table[BankInfo](tag, "bankinfo") with LiftedHasId {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val owner = column[String]("owner")
    val bankId = column[Int]("bank_id")
    val branches = column[Int]("branches")

    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
  }

  protected val bankInfoTableQuery = TableQuery[BankInfoTable]

  protected def bankTableInfoAutoInc = bankInfoTableQuery returning bankInfoTableQuery.map(_.id)
}
object BankInfoRepository extends BankInfoRepository with MySqlDBComponent
case class BankInfo(owner: String, branches: Int, bankId: Int, id: Option[Int] = None) extends HasId
You're trying to abstract over the result type with HasId but your code doesn't actually care about that. The id values that you're using are the ones from the lifted type, i.e. the table row class, so you need an abstraction at this level:
trait LiftedHasId {
  def id: slick.lifted.Rep[Int]
}
Then in DbAction:
type QueryType <: slick.lifted.TableQuery[_ <: Table[T] with LiftedHasId]
And BankInfoTable must define a concrete type for it:
type QueryType = slick.lifted.TableQuery[BankInfoTable]
Or you could add it as a second type parameter to DbAction (just like Query has two type parameters for the lifted type and the result type).
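For illustration, here is a minimal sketch of that second-type-parameter variant, assuming the DBComponent, HasId, and LiftedHasId definitions above (DaoHelpers and DbActionHelper are hypothetical names of mine; nesting the class inside the cake keeps the profile's Table type in scope for the type-parameter bounds):

import scala.concurrent.Future

trait DaoHelpers { this: DBComponent =>
  import driver.api._

  // The lifted table type TT is an explicit type parameter instead of an
  // abstract type member; LiftedHasId supplies the id column for filtering.
  abstract class DbActionHelper[T <: HasId, TT <: Table[T] with LiftedHasId] {
    def tableQuery: TableQuery[TT]

    def deleteByIdAsync(id: Int): Future[Int] =
      db.run { tableQuery.filter(_.id === id).delete }

    def findAll: Future[List[T]] = db.run { tableQuery.to[List].result }
  }
}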
Coming from JPA and Hibernate, I find using Slick pretty straightforward, apart from some joins and aggregate queries.
Now, are there some best practices I could employ when defining my tables and the mapped case classes? I currently have a single class that holds all the queries, the table definitions, and the case classes to which I map when I query the database. This class deals with some 10 to 15 tables, and the file has become quite big.
I'm now wondering if I should split this into different packages for readability. What do you guys think?
You can separate the Slick mapping tables from the Slick queries: put each mapping table into a trait, and mix that trait in wherever you want to write queries or join tables. For example:
package com.knol.db.repo
import com.knol.db.connection.DBComponent
import scala.concurrent.Future
trait BankRepository extends BankTable { this: DBComponent =>
  import driver.api._

  def create(bank: Bank): Future[Int] = db.run { bankTableAutoInc += bank }

  def update(bank: Bank): Future[Int] = db.run { bankTableQuery.filter(_.id === bank.id.get).update(bank) }

  def getById(id: Int): Future[Option[Bank]] = db.run { bankTableQuery.filter(_.id === id).result.headOption }

  def getAll(): Future[List[Bank]] = db.run { bankTableQuery.to[List].result }

  def delete(id: Int): Future[Int] = db.run { bankTableQuery.filter(_.id === id).delete }
}
private[repo] trait BankTable { this: DBComponent =>
  import driver.api._

  private[BankTable] class BankTable(tag: Tag) extends Table[Bank](tag, "bank") {
    val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val name = column[String]("name")

    def * = (name, id.?) <> (Bank.tupled, Bank.unapply)
  }

  protected val bankTableQuery = TableQuery[BankTable]

  protected def bankTableAutoInc = bankTableQuery returning bankTableQuery.map(_.id)
}
case class Bank(name: String, id: Option[Int] = None)
For joining two tables:
package com.knol.db.repo
import com.knol.db.connection.DBComponent
import scala.concurrent.Future
trait BankInfoRepository extends BankInfoTable { this: DBComponent =>
  import driver.api._

  def create(bankInfo: BankInfo): Future[Int] = db.run { bankTableInfoAutoInc += bankInfo }

  def update(bankInfo: BankInfo): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  def getById(id: Int): Future[Option[BankInfo]] = db.run { bankInfoTableQuery.filter(_.id === id).result.headOption }

  def getAll(): Future[List[BankInfo]] = db.run { bankInfoTableQuery.to[List].result }

  def delete(id: Int): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === id).delete }

  def getBankWithInfo(): Future[List[(Bank, BankInfo)]] =
    db.run {
      (for {
        info <- bankInfoTableQuery
        bank <- info.bank
      } yield (bank, info)).to[List].result
    }

  def getAllBankWithInfo(): Future[List[(Bank, Option[BankInfo])]] =
    db.run {
      bankTableQuery.joinLeft(bankInfoTableQuery).on(_.id === _.bankId).to[List].result
    }
}
private[repo] trait BankInfoTable extends BankTable { this: DBComponent =>
import driver.api._
private[BankInfoTable] class BankInfoTable(tag: Tag) extends Table[BankInfo](tag,"bankinfo") {
val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
val owner = column[String]("owner")
val bankId = column[Int]("bank_id")
val branches = column[Int]("branches")
def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)
def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
}
protected val bankInfoTableQuery = TableQuery[BankInfoTable]
protected def bankTableInfoAutoInc = bankInfoTableQuery returning bankInfoTableQuery.map(_.id)
}
case class BankInfo(owner: String, branches: Int, bankId: Int, id: Option[Int] = None)
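To show how the pieces compose, here is a hedged usage sketch; the MySqlDBComponent mixin is borrowed from the companion project, and the Demo object is mine:

import scala.concurrent.Await
import scala.concurrent.duration.Duration

// Close the cake by mixing a concrete DBComponent into the repository trait.
object BankRepository extends BankRepository with MySqlDBComponent

object Demo extends App {
  // Blocking here is just for the demo; real code would map over the Future.
  val banks: List[Bank] = Await.result(BankRepository.getAll(), Duration.Inf)
  banks.foreach(println)
}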
For more explanation, see the blog post and the GitHub project. They might be helpful!
I'm trying to create a user-defined type in Spark SQL, but I receive com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType, even when using their example. Has anyone made this work?
My code:
test("udt serialisation") {
val points = Seq(new ExamplePoint(1.3, 1.6), new ExamplePoint(1.3, 1.8))
val df = SparkContextForStdout.context.parallelize(points).toDF()
}
import java.util
import scala.collection.JavaConverters._
import org.apache.spark.sql.types._

@SQLUserDefinedType(udt = classOf[ExamplePointUDT])
case class ExamplePoint(val x: Double, val y: Double)

/**
 * User-defined type for [[ExamplePoint]].
 */
class ExamplePointUDT extends UserDefinedType[ExamplePoint] {
  override def sqlType: DataType = ArrayType(DoubleType, false)

  override def pyUDT: String = "pyspark.sql.tests.ExamplePointUDT"

  override def serialize(obj: Any): Seq[Double] = {
    obj match {
      case p: ExamplePoint =>
        Seq(p.x, p.y)
    }
  }

  override def deserialize(datum: Any): ExamplePoint = {
    datum match {
      case values: Seq[_] =>
        val xy = values.asInstanceOf[Seq[Double]]
        assert(xy.length == 2)
        new ExamplePoint(xy(0), xy(1))
      case values: util.ArrayList[_] =>
        val xy = values.asInstanceOf[util.ArrayList[Double]].asScala
        new ExamplePoint(xy(0), xy(1))
    }
  }

  override def userClass: Class[ExamplePoint] = classOf[ExamplePoint]
}
The useful part of the stack trace is this:
com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType
java.lang.ClassCastException: com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType
at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:316)
at org.apache.spark.sql.SQLContext$implicits$.rddToDataFrameHolder(SQLContext.scala:254)
It seems that the UDT needs to be used inside another class to work (as the type of a field). One solution to use it directly is to wrap it in a Tuple1:
test("udt serialisation") {
val points = Seq(new Tuple1(new ExamplePoint(1.3, 1.6)), new Tuple1(new ExamplePoint(1.3, 1.8)))
val df = SparkContextForStdout.context.parallelize(points).toDF()
df.collect().foreach(println(_))
}
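This works because the top-level schema of a DataFrame must be a StructType: a bare UDT is not one, but a Tuple1 (or any Product) becomes a struct whose fields can carry the UDT. An equivalent sketch with a named wrapper (PointRow is a hypothetical name of mine, not from the Spark example):

// Hypothetical wrapper: the UDT becomes the single named field of a struct.
case class PointRow(point: ExamplePoint)

test("udt serialisation via case class") {
  val rows = Seq(PointRow(new ExamplePoint(1.3, 1.6)), PointRow(new ExamplePoint(1.3, 1.8)))
  val df = SparkContextForStdout.context.parallelize(rows).toDF()
  df.collect().foreach(println(_))
}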