Slick 3.0 best practices

Coming from JPA and Hibernate, I find using Slick pretty straightforward, apart from some joins and aggregate queries.
Are there any best practices I could employ when defining my tables and the case classes they map to? I currently have a single class that holds all the queries, table definitions, and the case classes I map to when I query the database. This class deals with 10 to 15 tables, and the file has become quite big.
I'm now wondering if I should split this into different packages for readability. What do you guys think?

You can separate the Slick table mappings from the Slick queries: put each table mapping into its own trait, then mix that trait in wherever you want to write queries or join on the table. For example:
package com.knol.db.repo

import com.knol.db.connection.DBComponent
import scala.concurrent.Future

trait BankRepository extends BankTable { this: DBComponent =>
  import driver.api._

  def create(bank: Bank): Future[Int] = db.run { bankTableAutoInc += bank }

  def update(bank: Bank): Future[Int] = db.run { bankTableQuery.filter(_.id === bank.id.get).update(bank) }

  def getById(id: Int): Future[Option[Bank]] = db.run { bankTableQuery.filter(_.id === id).result.headOption }

  def getAll(): Future[List[Bank]] = db.run { bankTableQuery.to[List].result }

  def delete(id: Int): Future[Int] = db.run { bankTableQuery.filter(_.id === id).delete }
}

private[repo] trait BankTable { this: DBComponent =>
  import driver.api._

  private[BankTable] class BankTable(tag: Tag) extends Table[Bank](tag, "bank") {
    val id   = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val name = column[String]("name")

    def * = (name, id.?) <> (Bank.tupled, Bank.unapply)
  }

  protected val bankTableQuery = TableQuery[BankTable]

  protected def bankTableAutoInc = bankTableQuery returning bankTableQuery.map(_.id)
}

case class Bank(name: String, id: Option[Int] = None)
For joining two tables:
package com.knol.db.repo

import com.knol.db.connection.DBComponent
import scala.concurrent.Future

trait BankInfoRepository extends BankInfoTable { this: DBComponent =>
  import driver.api._

  def create(bankInfo: BankInfo): Future[Int] = db.run { bankTableInfoAutoInc += bankInfo }

  def update(bankInfo: BankInfo): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  def getById(id: Int): Future[Option[BankInfo]] = db.run { bankInfoTableQuery.filter(_.id === id).result.headOption }

  def getAll(): Future[List[BankInfo]] = db.run { bankInfoTableQuery.to[List].result }

  def delete(id: Int): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === id).delete }

  def getBankWithInfo(): Future[List[(Bank, BankInfo)]] =
    db.run {
      (for {
        info <- bankInfoTableQuery
        bank <- info.bank
      } yield (bank, info)).to[List].result
    }

  def getAllBankWithInfo(): Future[List[(Bank, Option[BankInfo])]] =
    db.run {
      bankTableQuery.joinLeft(bankInfoTableQuery).on(_.id === _.bankId).to[List].result
    }
}

private[repo] trait BankInfoTable extends BankTable { this: DBComponent =>
  import driver.api._

  private[BankInfoTable] class BankInfoTable(tag: Tag) extends Table[BankInfo](tag, "bankinfo") {
    val id       = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val owner    = column[String]("owner")
    val bankId   = column[Int]("bank_id")
    val branches = column[Int]("branches")

    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
  }

  protected val bankInfoTableQuery = TableQuery[BankInfoTable]

  protected def bankTableInfoAutoInc = bankInfoTableQuery returning bankInfoTableQuery.map(_.id)
}

case class BankInfo(owner: String, branches: Int, bankId: Int, id: Option[Int] = None)
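The DBComponent self-type used throughout is not shown above. A minimal sketch of what it and a concrete binding might look like (the MySQL profile and the "mysql" config key are assumptions for illustration):

package com.knol.db.connection

import slick.driver.JdbcProfile

trait DBComponent {
  val driver: JdbcProfile // concrete profile supplied by the mixed-in component
  import driver.api._
  val db: Database // the shared Slick database handle
}

// One possible concrete binding (profile and config key assumed):
trait MySqlDBComponent extends DBComponent {
  val driver = slick.driver.MySQLDriver
  import driver.api._
  val db: Database = Database.forConfig("mysql") // reads the connection config from application.conf
}

// Wiring a usable repository:
object BankRepository extends BankRepository with MySqlDBComponent

With that in place, a call site can simply invoke BankRepository.getAll() and get back a Future.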
For more explanation, see the blog and GitHub. It might be helpful!

Related

Spark SQL doesn't call my UDT equals/hashcode methods

I want to implement my comparison operators (equals, hashCode, ordering) in a data type I defined in Spark SQL. Although Spark SQL UDTs still remain private, I followed some examples like this one to work around the situation.
I have a class called MyPoint:
@SQLUserDefinedType(udt = classOf[MyPointUDT])
case class MyPoint(x: Double, y: Double) extends Serializable {
  override def hashCode(): Int = {
    println("hash code")
    31 * (31 * x.hashCode()) + y.hashCode()
  }

  override def equals(other: Any): Boolean = {
    println("equals")
    other match {
      case that: MyPoint => this.x == that.x && this.y == that.y
      case _ => false
    }
  }
}
Then, I have the UDT class:
private class MyPointUDT extends UserDefinedType[MyPoint] {
  override def sqlType: DataType = ArrayType(DoubleType, containsNull = false)

  override def serialize(obj: MyPoint): ArrayData = {
    obj match {
      case features: MyPoint =>
        new GenericArrayData2(Array(features.x, features.y))
    }
  }

  override def deserialize(datum: Any): MyPoint = {
    datum match {
      case data: ArrayData if data.numElements() == 2 =>
        val arr = data.toDoubleArray()
        new MyPoint(arr(0), arr(1))
    }
  }

  override def userClass: Class[MyPoint] = classOf[MyPoint]

  override def asNullable: MyPointUDT = this
}
Then I create a simple DataFrame:
val p1 = new MyPoint(1.0, 2.0)
val p2 = new MyPoint(1.0, 2.0)
val p3 = new MyPoint(10.0, 20.0)
val p4 = new MyPoint(11.0, 22.0)
val points = Seq(
  ("P1", p1),
  ("P2", p2),
  ("P3", p3),
  ("P4", p4)
).toDF("label", "point")
points.registerTempTable("points")
spark.sql("SELECT Distinct(point) FROM points").show()
The problem is: why doesn't the SQL query execute the equals method inside the MyPoint class? How are the comparisons being made? How can I implement my comparison operators in this example?
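For what it's worth, a quick way to probe what Spark actually compares here is to serialize two equal points and inspect the internal representation; DISTINCT operates on the serialized sqlType values (ArrayData), not on MyPoint instances, so the overridden equals/hashCode are never invoked. A sketch reusing the UDT above:

val udt = new MyPointUDT

// Two logically equal points serialize to the same internal array.
val a = udt.serialize(MyPoint(1.0, 2.0))
val b = udt.serialize(MyPoint(1.0, 2.0))

// Spark compares these element-wise internal forms, never calling MyPoint.equals.
println(a.toDoubleArray().sameElements(b.toDoubleArray())) // true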

Cassandra quill Codec not found for requested operation

I followed the example given in the docs, but the following fails with Codec not found for requested operation: [varchar <-> java.util.UUID]. How is one supposed to provide a custom Cassandra codec with Quill?
import java.util.UUID
import javax.inject.Inject
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import io.getquill.{CassandraAsyncContext, LowerCase, MappedEncoding}
import scala.concurrent.duration._

object Main extends App {
  case class MyOrder(id: String, uuid: UUID)

  trait OrderRepository {
    implicit val encodeUUID: MappedEncoding[UUID, String] = MappedEncoding[UUID, String](_.toString)
    implicit val decodeUUID: MappedEncoding[String, UUID] = MappedEncoding[String, UUID](UUID.fromString)

    def list(): Future[Iterable[MyOrder]]
    def get(id: String): Future[Option[MyOrder]]
  }

  class OrderRepositoryImpl(db: CassandraAsyncContext[LowerCase]) extends OrderRepository {
    import db._

    override def list(): Future[Iterable[MyOrder]] = {
      run(quote(query[MyOrder]))
    }

    override def get(id: String): Future[Option[MyOrder]] = {
      val q = quote { (id: String) =>
        query[MyOrder].filter(_.id == id)
      }
      val res = run(q(lift(id.toString)))
      res.map(_.headOption)
    }
  }

  val db = new CassandraAsyncContext[LowerCase]("db")
  val repo = new OrderRepositoryImpl(db)

  val maybeOrder = Await.result(repo.get("id1"), 2.seconds)
  println(s"maybeOrder $maybeOrder")
}
and the Cassandra DDL:
CREATE TABLE myorder (
  id text,
  uuid text,
  PRIMARY KEY (id)
);

INSERT INTO myorder (id, uuid)
VALUES ('id1', 'a8a8a416-2436-4e18-82b3-d5881c8fec1a');
Obviously I can just create a class whose uuid field is of type String, but the goal here is to figure out how to use custom decoders.

Upper bound for Slick 3.1.1 query type

I have a DAO helper trait that provides common functionality to DAOs. It needs to be able to access the table query, and run actions. I'm having trouble defining or otherwise providing the query type to the helper trait.
Below is some code, also available in a short demo project on GitHub, in the action branch.
First, db is defined in trait DBComponent:
trait DBComponent {
  import slick.driver.JdbcProfile

  val driver: JdbcProfile

  import driver.api._

  val db: Database
}
The classes to be persisted extend HasId:
trait HasId {
  def id: Option[Int] = None
}
Here is one such class to be persisted:
case class BankInfo(
  owner: String,
  branches: Int,
  bankId: Int,
  override val id: Option[Int] = None
) extends HasId
The problem is that I don't know how to set QueryType in the following DAO helper trait; I expect that most of the errors that follow are a consequence of the improper type that I used:
/** Handles all actions pertaining to HasId or that do not require parameters */
trait DbAction[T <: HasId] { this: DBComponent =>
  import driver.api._ // defines DBIOAction

  type QueryType <: slick.lifted.TableQuery[Table[T]] // this type is wrong! What should it be?

  def tableQuery: QueryType

  // Is this defined correctly?
  def run[R](action: DBIOAction[R, NoStream, Nothing]): Future[R] = db.run { action }

  def deleteById(id: Option[Long]): Unit =
    for { i <- id } run { tableQuery.filter(_.id === id).delete } // id is unknown because QueryType is wrong

  def findAll: Future[List[T]] = run { tableQuery.to[List].result } // also b0rked

  // Remaining methods shown on GitHub
}
FYI, here is how the above will be used. First, the trait that defines the table query:
trait BankInfoTable extends BankTable { this: DBComponent =>
  import driver.api._

  class BankInfoTable(tag: Tag) extends Table[BankInfo](tag, "bankinfo") {
    val id       = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val owner    = column[String]("owner")
    val bankId   = column[Int]("bank_id")
    val branches = column[Int]("branches")

    def bankFK = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
  }

  val tableQuery = TableQuery[BankInfoTable]

  def autoInc = tableQuery returning tableQuery.map(_.id)
}
It all comes together here:
trait BankInfoRepositoryLike extends BankInfoTable with DbAction[BankInfo] { this: DBComponent =>
  import driver.api._

  @inline def updateAsync(bankInfo: BankInfo): Future[Int] =
    run { tableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  @inline def getByIdAsync(id: Int): Future[Option[BankInfo]] =
    run { tableQuery.filter(_.id === id).result.headOption }
}
Suggestions?
Full working example:
package com.knol.db.repo

import com.knol.db.connection.DBComponent
import com.knol.db.connection.MySqlDBComponent
import scala.concurrent.{Await, Future}
import concurrent.duration.Duration

trait LiftedHasId {
  def id: slick.lifted.Rep[Int]
}

trait HasId {
  def id: Option[Int]
}

trait GenericAction[T <: HasId] { this: DBComponent =>
  import driver.api._

  type QueryType <: slick.lifted.TableQuery[_ <: Table[T] with LiftedHasId]

  val tableQuery: QueryType

  @inline def deleteAsync(id: Int): Future[Int] = db.run { tableQuery.filter(_.id === id).delete }

  @inline def delete(id: Int): Int = Await.result(deleteAsync(id), Duration.Inf)

  @inline def deleteAllAsync(): Future[Int] = db.run { tableQuery.delete }

  @inline def deleteAll(): Int = Await.result(deleteAllAsync(), Duration.Inf)

  @inline def getAllAsync: Future[List[T]] = db.run { tableQuery.to[List].result }

  @inline def getAll: List[T] = Await.result(getAllAsync, Duration.Inf)

  @inline def getByIdAsync(id: Int): Future[Option[T]] =
    db.run { tableQuery.filter(_.id === id).result.headOption }

  @inline def getById(id: Int): Option[T] = Await.result(getByIdAsync(id), Duration.Inf)

  @inline def deleteById(id: Option[Int]): Unit =
    db.run { tableQuery.filter(_.id === id).delete }

  @inline def findAll: Future[List[T]] = db.run { tableQuery.to[List].result }
}

trait BankInfoRepository extends BankInfoTable with GenericAction[BankInfo] { this: DBComponent =>
  import driver.api._

  type QueryType = TableQuery[BankInfoTable]

  val tableQuery = bankInfoTableQuery

  def create(bankInfo: BankInfo): Future[Int] = db.run { bankTableInfoAutoInc += bankInfo }

  def update(bankInfo: BankInfo): Future[Int] = db.run { bankInfoTableQuery.filter(_.id === bankInfo.id.get).update(bankInfo) }

  /** Get bank and info using the foreign key relationship */
  def getBankWithInfo(): Future[List[(Bank, BankInfo)]] =
    db.run {
      (for {
        info <- bankInfoTableQuery
        bank <- info.bank
      } yield (bank, info)).to[List].result
    }

  /** Get all banks and their info. Some banks may not have any info. */
  def getAllBankWithInfo(): Future[List[(Bank, Option[BankInfo])]] =
    db.run {
      bankTableQuery.joinLeft(bankInfoTableQuery).on(_.id === _.bankId).to[List].result
    }
}

private[repo] trait BankInfoTable extends BankTable { this: DBComponent =>
  import driver.api._

  class BankInfoTable(tag: Tag) extends Table[BankInfo](tag, "bankinfo") with LiftedHasId {
    val id       = column[Int]("id", O.PrimaryKey, O.AutoInc)
    val owner    = column[String]("owner")
    val bankId   = column[Int]("bank_id")
    val branches = column[Int]("branches")

    def bank = foreignKey("bank_product_fk", bankId, bankTableQuery)(_.id)

    def * = (owner, branches, bankId, id.?) <> (BankInfo.tupled, BankInfo.unapply)
  }

  protected val bankInfoTableQuery = TableQuery[BankInfoTable]

  protected def bankTableInfoAutoInc = bankInfoTableQuery returning bankInfoTableQuery.map(_.id)
}

object BankInfoRepository extends BankInfoRepository with MySqlDBComponent

case class BankInfo(owner: String, branches: Int, bankId: Int, id: Option[Int] = None) extends HasId
You're trying to abstract over the result type with HasId, but your code doesn't actually care about that. The id values you're using are the ones from the lifted type, i.e. the table row class, so you need an abstraction at that level:
trait LiftedHasId {
  def id: slick.lifted.Rep[Int]
}
Then in DbAction:
type QueryType <: slick.lifted.TableQuery[_ <: Table[T] with LiftedHasId]
And BankInfoTable must define a concrete type for it:
type QueryType = slick.lifted.TableQuery[BankInfoTable]
Or you could add it as a second type parameter to DbAction (just like Query has two type parameters for the lifted type and the result type).
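A minimal sketch of that second option (untested; slick.lifted.AbstractTable is used so the table type can be named in the bounds before driver.api is imported):

import scala.concurrent.Future

trait DbAction[T <: HasId, TT <: slick.lifted.AbstractTable[T] with LiftedHasId] { this: DBComponent =>
  import driver.api._

  // The query is now typed by the concrete table class, so _.id resolves via LiftedHasId.
  val tableQuery: TableQuery[TT]

  def getByIdAsync(id: Int): Future[Option[T]] =
    db.run { tableQuery.filter(_.id === id).result.headOption }
}

// A repository then fixes both type parameters, e.g.:
// trait BankInfoRepository extends BankInfoTable with DbAction[BankInfo, BankInfoTable] { ... }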

Slick DBIO: master-detail

I need to collect report data from a master-detail relations. Here is a simplified example:
case class Person(id: Int, name: String)
case class Order(id: String, personId: Int, description: String)

class PersonTable(tag: Tag) extends Table[Person](tag, "person") {
  def id = column[Int]("id")
  def name = column[String]("name")

  override def * = (id, name) <> (Person.tupled, Person.unapply)
}

class OrderTable(tag: Tag) extends Table[Order](tag, "order") {
  def id = column[String]("id")
  def personId = column[Int]("personId")
  def description = column[String]("description")

  override def * = (id, personId, description) <> (Order.tupled, Order.unapply)
}

val persons = TableQuery[PersonTable]
val orders = TableQuery[OrderTable]

case class PersonReport(nameToDescription: Map[String, Seq[String]])

/** Some complex function that cannot be expressed in SQL or in Slick's #join. */
def myScalaCondition(person: Person): Boolean =
  person.name.contains("1")

// Doesn't compile:
// val reportDbio1: DBIO[PersonReport] =
//   (for { allPersons <- persons.result
//          person <- allPersons
//          if myScalaCondition(person)
//          descriptions <- orders.
//            filter(_.personId === person.id).
//            map(_.description).result
//        } yield (person.name, descriptions)
//   ).map(s => PersonReport(s.toMap))

val reportDbio2: DBIO[PersonReport] =
  persons.result.flatMap { allPersons =>
    val dbios = allPersons.filter(myScalaCondition).map { person =>
      orders.
        filter(_.personId === person.id).
        map(_.description).result.map { seq => (person.name, seq) }
    }
    DBIO.sequence(dbios)
  }.map(ps => PersonReport(ps.toMap))
This is far from straightforward, and when I need to collect master-detail data across three levels it becomes incomprehensible.
Is there a better way?
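One alternative worth sketching (assuming both result sets fit in memory and an implicit ExecutionContext is in scope): run the two queries independently and group on the client side, which keeps the nesting flat even for deeper hierarchies:

val reportDbio3: DBIO[PersonReport] =
  (persons.result zip orders.result).map { case (allPersons, allOrders) =>
    // Group the detail rows once, then assemble the report in plain Scala.
    val descriptionsByPersonId = allOrders.groupBy(_.personId).mapValues(_.map(_.description))
    PersonReport(
      allPersons.filter(myScalaCondition)
        .map(p => p.name -> descriptionsByPersonId.getOrElse(p.id, Seq.empty))
        .toMap
    )
  }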

Custom unpickler for Enumeration

I have an enumeration that I'm trying to pickle and unpickle with pickling 0.8.0 and Scala 2.11:
object CommandType extends Enumeration {
  val Push, Pop = Value
}
Pickling cannot do it automagically at the moment. The custom pickler-unpickler looks like this:
class CommandTypePickler(implicit val format: PickleFormat)
  extends SPickler[CommandType.Value] with Unpickler[CommandType.Value] with LazyLogging {

  def pickle(picklee: CommandType.Value, builder: PBuilder): Unit = {
    builder.beginEntry(picklee)
    builder.putField("commandType", b =>
      b.hintTag(stringTag).beginEntry(picklee.toString).endEntry()
    )
    builder.endEntry()
  }

  override def unpickle(tag: => FastTypeTag[_], reader: PReader): CommandType.Value = {
    val ctReader = reader.readField("commandType")
    val tag = ctReader.beginEntry()
    logger.debug(s"tag is ${tag.toString}")
    val value = stringUnpickler.unpickle(tag, ctReader).asInstanceOf[String]
    ctReader.endEntry()
    CommandType.withName(value)
  }
}
Serialized enumeration:
{
  "tpe": "scala.Enumeration.Value",
  "commandType": {
    "tpe": "java.lang.String",
    "value": "Push"
  }
}
When unpickling, this throws the following: ScalaReflectionException: class scala.Enumeration.Value in JavaMirror with sun.misc.Launcher$AppClassLoader@5c3eeab3 of type class sun.misc.Launcher$AppClassLoader with classpath ... not found. What am I doing wrong?
