Slick 3: return custom case class from query

Currently I have something like this:
val q = for {
department <- departments if department.id === x
employee <- employees if employee.departmentId === department.id
} yield (department, employee)
which will give me:
(sales, john)
(sales, bob)
(finance, william)
(finance, helen)
I then group the results by departments:
val grouped = results.groupBy(_._1).mapValues(_.map(_._2))
to give me:
(sales -> (john, bob))
(finance -> (william, helen))
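On plain collections (with hypothetical string stand-ins for the mapped department/employee rows), the grouping step looks like this:

```scala
// Hypothetical stand-ins for the (department, employee) result rows.
val results = Seq(
  ("sales", "john"), ("sales", "bob"),
  ("finance", "william"), ("finance", "helen")
)

// Group rows by department, keeping only the employee half of each tuple.
val grouped: Map[String, Seq[String]] =
  results.groupBy(_._1).map { case (dept, rows) => dept -> rows.map(_._2) }
```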
I would like to avoid the tuples. Whilst it's quite clear in this simple example, it will quickly become unmanageable if I want the department, manager, deputy, and a list of employees in a structured format. This is especially true if the query and the result processing are not close to each other in the source code.
How can I yield something other than a tuple in the query?
I tried to yield a case class:
case class DeptEmployeeRow(department: Department, employee: Employee)
val q = for {
department <- departments if department.id === x
employee <- employees if employee.departmentId === department.id
} yield DeptEmployeeRow(department, employee)
but Slick doesn't like this. Using a monomorphic case class with Slick's CaseClassShape doesn't work either, because it only supports built-in column types, i.e. I could use:
yield DeptEmployeeRow(department.name, employee.name)
but not
yield DeptEmployeeRow(department, employee)

Tuples are in fact quite powerful, especially in the context of pattern matching. For example, you can access your tuple contents like this:
case class DeptEmployeeRow(department: Department, employee: Employee)
val q = for {
department <- departments if department.id === x
employee <- employees if employee.departmentId === department.id
} yield (department, employee)
Access tuple using pattern matching:
val result1: Future[Seq[DeptEmployeeRow]] = db.run(q.result).map(_.map {
case (department, employee) => DeptEmployeeRow(department, employee)
})
Or use a shortcut:
val result2: Future[Seq[DeptEmployeeRow]] = db.run(q.result).map(_.map(DeptEmployeeRow.tupled))
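A quick plain-Scala illustration of the .tupled shortcut, using simplified stand-ins for the real row types:

```scala
// Simplified stand-ins for the mapped Department/Employee row types.
case class Department(name: String)
case class Employee(name: String)
case class DeptEmployeeRow(department: Department, employee: Employee)

// .tupled turns the 2-argument constructor into a function that takes
// one tuple — exactly the shape of each query result row.
val row: DeptEmployeeRow =
  DeptEmployeeRow.tupled((Department("sales"), Employee("john")))
```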
You can take this even further, modelling 1:n relations:
case class DeptWithEmployees(department: Department, employees: Seq[Employee])
val result3: Future[Iterable[DeptWithEmployees]] = db.run(q.result).map { results =>
results.groupBy(_._1).map { // _._1 is the department
case (dept, grp) => DeptWithEmployees(dept, grp.map(_._2))
}
}
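The same grouping can be checked on plain collections, again with simplified stand-in case classes for the real rows:

```scala
case class Department(name: String)
case class Employee(name: String)
case class DeptWithEmployees(department: Department, employees: Seq[Employee])

// Same shape as the query result: one (department, employee) tuple per row.
val rows = Seq(
  (Department("sales"), Employee("john")),
  (Department("sales"), Employee("bob")),
  (Department("finance"), Employee("helen"))
)

// Group by department and collect the employees of each group.
val grouped: Seq[DeptWithEmployees] =
  rows.groupBy(_._1)
      .map { case (dept, grp) => DeptWithEmployees(dept, grp.map(_._2)) }
      .toSeq
```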

Related

Ecto - select all fields from all tables in a 5 table join

I have a products table, a categories table, and a shops table, as well as joining tables product_shops, and product_categories.
I want to select all products that have a category in conn.query_params and that are in shops within a certain distance, also specified in conn.query_params. And I want to return pretty much every field of the product, category, and shop that was selected.
I have this:
get "/products" do
query = conn.query_params
point = %Geo.Point{coordinates: {String.to_float(query["longitude"]), String.to_float(query["latitude"])}, srid: 4326}
shops = within(Shop, point, String.to_float(query["distanceFromPlaceValue"]) * 1000) |> order_by_nearest(point) |> select_with_distance(point) |> Api.Repo.all
categories = Enum.map(query["categories"], fn(x) -> String.to_integer(x) end)
shop_ids = Enum.map(shops, fn(x) -> x.id end)
query1 = from p in Product,
join: ps in ProductShop, on: p.id == ps.p_id,
join: s in Shop, on: s.id == ps.s_id,
join: pc in ProductCategory, on: p.id == pc.p_id,
join: c in Category, on: c.id == pc.c_id,
where: Enum.member?(categories, c.id),
where: Enum.member?(shop_ids, s.id),
select: p, c, s,
group_by s,
order_by s.distance
In the code above, the shops variable holds all the shops that are within the specified distance.
This is inside shop.ex, to get all shops within the distance, ordered from the closest shop:
def within(query, point, radius_in_m) do
{lng, lat} = point.coordinates
from(shop in query, where: fragment("ST_DWithin(?::geography, ST_SetSRID(ST_MakePoint(?, ?), ?), ?)", shop.point, ^lng, ^lat, ^point.srid, ^radius_in_m))
end
def order_by_nearest(query, point) do
{lng, lat} = point.coordinates
from(shop in query, order_by: fragment("? <-> ST_SetSRID(ST_MakePoint(?,?), ?)", shop.point, ^lng, ^lat, ^point.srid))
end
def select_with_distance(query, point) do
{lng, lat} = point.coordinates
from(shop in query, select: %{shop | distance: fragment("ST_Distance_Sphere(?, ST_SetSRID(ST_MakePoint(?,?), ?))", shop.point, ^lng, ^lat, ^point.srid)})
end
This is the shop schema; the distance field gets populated when select_with_distance is called.
@derive {Poison.Encoder, only: [:name, :place_id, :point]}
schema "shops" do
field :name, :string
field :place_id, :string
field :point, Geo.Point
field :distance, :float, virtual: true
timestamps()
end
My current error is in the select: p, c, s line as I'm unsure how to select the whole lot:
== Compilation error on file lib/api/router.ex ==
** (SyntaxError) lib/api/router.ex:153: syntax error before: c
(elixir) lib/kernel/parallel_compiler.ex:117: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/1
I'm also unsure if the group_by should be the shop. I just know I want to return all fields, only unique records, ordered by proximity of the shop. I feel most of the way there but am a bit stuck on the select, group_by, and order_by.
EDIT: Thanks Dogbert for fixing those two issues in the comments.
Currently the code is:
products_shops_categories = from p in Product,
join: ps in ProductShop, on: p.id == ps.p_id,
join: s in Shop, on: s.id == ps.s_id,
join: pc in ProductCategory, on: p.id == pc.p_id,
join: c in Category, on: c.id == pc.c_id,
where: c.id in ^categories,
where: s.id in ^shop_ids,
select: {p, c, s}
group_by s,
order_by s.distance
I have this error now:
** (Ecto.Query.CompileError) `order_by(s.distance())` is not a valid query expression.
* If you intended to call a database function, please check the documentation
for Ecto.Query to see the supported database expressions
* If you intended to call an Elixir function or introduce a value,
you need to explicitly interpolate it with ^
expanding macro: Ecto.Query.group_by/2
lib/api/router.ex:155: Api.Router.do_match/4
(elixir) lib/kernel/parallel_compiler.ex:117: anonymous fn/4 in Kernel.ParallelCompiler.spawn_compilers/1

composing single insert statement in slick 3

This is the case class representing the entire row:
case class CustomerRow(id: Long, name: String, 20 other fields ...)
I have a shape case class that only 'exposes' a subset of columns and it is used when user creates/updates a customer:
case class CustomerForm(name: String, subset of all fields ...)
I can use CustomerForm for updates. However I can't use it for inserts. There are some columns not in CustomerForm that are required (not null) and can only be provided by the server. What I do now is that I create CustomerRow from CustomerForm:
def form2row(form: CustomerForm, id: Long, serverOnlyValue: Long, etc...) = CustomerRow(
id = id,
serverOnlyColumn = serverOnlyValue,
name = form.name,
// and so on for 20 more tedious lines of code
)
and use it for insert.
Is there a way to compose insert in slick so I can remove that tedious form2row function?
Something like:
(customers.map(formShape) += form) andAlsoOnTheSameRow .map(c => (c.id, c.serverOnlyColumn)) += (id, someValue)
?
Yes, you can do this like:
case class Person(name: String, email: String, address: String, id: Option[Int] = None)
case class NameAndAddress(name: String,address: String)
class PersonTable(tag: Tag) extends Table[Person](tag, "person") {
val id = column[Int]("id", O.PrimaryKey, O.AutoInc)
val name = column[String]("name")
val email = column[String]("email")
val address = column[String]("address")
//for partial insert
def nameWithAddress = (name, address) <> (NameAndAddress.tupled, NameAndAddress.unapply)
def * = (name, email, address, id.?) <> (Person.tupled, Person.unapply)
}
val personTableQuery = TableQuery[PersonTable]
// insert partial fields
personTableQuery.map(_.nameWithAddress) += NameAndAddress("abc", "xyz")
Make sure you are aware of nullable fields: they should be of the form Option[T], where T is the field type. In my example, email should be Option[String] instead of String.
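The two functions that <> needs are exactly the companion object's tupled and unapply; a minimal plain-Scala sketch of that pair (no Slick required):

```scala
case class NameAndAddress(name: String, address: String)

// <> needs a function tuple => case class and one case class => Option[tuple].
// The case class companion provides both: tupled and unapply.
val pack: ((String, String)) => NameAndAddress = NameAndAddress.tupled
val unpack: NameAndAddress => Option[(String, String)] = NameAndAddress.unapply

val na = pack(("abc", "xyz"))
```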

How to use DBIO.sequence and avoid StackOverflowError

I am a Slick beginner, just experimenting with Slick 3.0 RC1. In my first project I'd like to import data from a text file into various tables. The whole import should happen in the order the data appear in the file, within one transaction.
I tried to create an Iterator of the actions and wrap them in a DBIO.sequence.
The problem is that when the number of rows is big, the import fails with a StackOverflowError. Obviously I have misunderstood how to use Slick to do what I want. Is there a better way to chain a large number of actions into one transaction?
Here a simplified version of my code, where instead of reading the data from a file, I simply "import" the numbers from a Range. The even ones to the table XS, the odd ones to the table YS.
val db = Database.forConfig("h2mem1")
try {
class Xs(tag: Tag) extends Table[(Long, String)](tag, "XS") {
def id = column[Long]("ID", O.PrimaryKey)
def name = column[String]("NAME")
override def * : ProvenShape[(Long, String)] = (id, name)
}
class Ys(tag: Tag) extends Table[(Long, String)](tag, "YS") {
def id = column[Long]("ID", O.PrimaryKey)
def name = column[String]("NAME")
override def * : ProvenShape[(Long, String)] = (id, name)
}
val xs = TableQuery[Xs]
val ys = TableQuery[Ys]
val setupAction = DBIO.seq((xs.schema ++ ys.schema).create)
val importAction = DBIO.sequence((1L to 100000L).iterator.map { x =>
if (x % 2 == 0) {
xs += ((x, x.toString))
} else {
ys += ((x, x.toString))
}
})
val f = db.run((setupAction andThen importAction))
Await.result(f, Duration.Inf)
} finally {
db.close
}
The problem was caused by an inefficient implementation in the RC1 version of Slick. The implementation has since been improved, as seen in the issue thread here.
In Slick 3.0 RC3 the problem is solved :)
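Independent of the RC3 fix, a common way to keep the composed action small is to batch the rows and issue one bulk insert (xs ++= batch) per group instead of one action per row. A plain-Scala sketch of just the batching step (the batch size of 1000 is an arbitrary assumption):

```scala
// Build the rows to import, then split them into fixed-size batches.
// Each batch would then become a single bulk insert: xs ++= batch.
val rows = (1L to 100000L).map(x => (x, x.toString))
val batches: Iterator[Seq[(Long, String)]] = rows.grouped(1000)

val batchCount = batches.size  // 100 batches of 1000 rows each
```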

Null value comparison in slick

I've encountered a problem with nullable column comparison.
If some columns are Option[T], I wonder how Slick translates operations like === on these columns to SQL.
There are two possibilities: a null value (which is None in Scala) and a non-null value.
In the case of a null value, the SQL should use is instead of =.
However, Slick doesn't handle this correctly in the following case.
Here is the code (with H2 database):
object Test1 extends Controller {
case class User(id: Option[Int], first: String, last: String)
class Users(tag: Tag) extends Table[User](tag, "users") {
def id = column[Int]("id",O.Nullable)
def first = column[String]("first")
def last = column[String]("last")
def * = (id.?, first, last) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]
def find_u(u:User) = DB.db.withSession{ implicit session =>
users.filter( x=> x.id === u.id && x.first === u.first && x.last === u.last ).firstOption
}
def t1 = Action {
DB.db.withSession { implicit session =>
DB.createIfNotExists(users)
val u1 = User(None,"123","abc")
val u2 = User(Some(1232),"123","abc")
users += u1
val r1 = find_u(u1)
println(r1)
val r2 = find_u(u2)
println(r2)
}
Ok("good")
}
}
I print out the sql. It is following result for the first find_u.
[debug] s.s.j.J.statement - Preparing statement: select x2."id", x2."first", x2."last" from "users" x2 where (
(x2."id" = null) and (x2."first" = '123')) and (x2."last" = 'abc')
Notice that (x2."id" = null) is incorrect here. It should be (x2."id" is null).
Update:
Is it possible to only compare non-null fields in an automatic fashion? Ignore those null columns.
E.g. in the case of User(None,"123","abc"), only do where (x2."first" = '123')) and (x2."last" = 'abc')
Slick uses three-valued logic. This shows when nullable columns are involved. In that regard it does not adhere to Scala semantics, but uses SQL semantics. So (x2."id" = null) is indeed correct under these design decisions. To do a strict NULL check, use x.id.isEmpty. For strict comparison do
(if(u.id.isEmpty) x.id.isEmpty else (x.id === u.id))
Update:
To compare only when the user id is non-null use
(u.id.isEmpty || (x.id === u.id)) && ...
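The three-valued logic can be modelled in plain Scala to see why = null never matches anything; sqlEq below is a hypothetical helper for illustration, not a Slick API:

```scala
// SQL's three-valued logic for "=": if either side is NULL, the result
// is unknown (modelled here as None) — never true.
def sqlEq[A](l: Option[A], r: Option[A]): Option[Boolean] =
  for (a <- l; b <- r) yield a == b

// Scala's ==, by contrast, treats None == None as true.
val scalaEq = (None: Option[Int]) == (None: Option[Int])
val sqlNull = sqlEq(None: Option[Int], None)  // "unknown", so the row is filtered out
```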

Slick and leftjoin : wrong type on result tuple

I'm trying to do a left join with Slick.
I have two case classes (Book and Author) and 2 tables
If I do this :
(for {
(book, author) <- Books leftJoin Authors on (_.authorId === _.id)
} yield (book, author.?)
).list
The result is a List[(Book, Option[Authors.type])] and I need a List[(Book, Option[Author])]
Do you know why I get the wrong type with my query?
Note : my Authors object links well with the Author case class:
object Authors extends Table[Author]("author"){...}
Thanks :)
Loïc
I have a workaround, but I don't know if it's the right solution.
I retrieve the author's columns as optional and then use the map function on my results to build the (Book, Option[Author]) tuples.
val query = for {
(book, author) <- Books leftJoin Authors on (_.authorId === _.id)
} yield (book, author.id.?, author.name.?)
val result = query.list.map(row => (row._1, row._2.map(value => Author(Option(value), row._3.get))))
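The lifting this workaround performs can be sketched on plain collections (simplified Book/Author stand-ins; like the workaround, it assumes the name column is present whenever the id column is):

```scala
case class Author(id: Option[Long], name: String)
case class Book(title: String)

// One flat row per result: the book plus the author's columns, each optional
// because of the left join.
val flat: Seq[(Book, Option[Long], Option[String])] = Seq(
  (Book("Dune"), Some(1L), Some("Herbert")),
  (Book("Anon"), None, None)
)

// Rebuild Option[Author]: the author exists only when its id column was non-null.
val lifted: Seq[(Book, Option[Author])] =
  flat.map { case (book, id, name) =>
    (book, id.map(i => Author(Some(i), name.get)))
  }
```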
