How to view the SQL generated by an Opaleye query? - haskell

In the Opaleye Tutorial, line 88 lists the following example ghci command.
ghci> printSql personQuery
There, personQuery is a predefined query, and printSql appears to be a function that prints the SQL it generates.
Where is this printSql function defined? I have searched through all the modules of the opaleye package and do not see it defined there.
How can I see the SQL generated by an opaleye query?
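For reference, printSql is not exported by the opaleye library itself; it is defined in the tutorial's own source. A minimal sketch of such a helper, assuming a recent opaleye where Opaleye.Sql exports showSql (module names and re-exports may differ between versions):

import Data.Profunctor.Product.Default (Default)
import Opaleye (Select)
import Opaleye.Internal.Unpackspec (Unpackspec)
import Opaleye.Sql (showSql)

-- Print the SQL opaleye would generate for a query, or a note if the query is empty.
printSql :: Default Unpackspec a a => Select a -> IO ()
printSql = putStrLn . maybe "Empty query" id . showSql

With a definition like that in scope, ghci> printSql personQuery behaves as the tutorial shows.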

Related

Issue while converting Scala Map to Object on Databricks Notebook

Issue
I have a scenario where I need to convert a Scala Map to a case class object, and with the help of the following references I was able to achieve it locally (Scala version 2.12.13):
Scala: convert map to case class
Convert a Map into Scala object
But when I tried running the same block of code in a Databricks notebook, it throws an error:
IllegalArgumentException: Cannot construct instance of '$line23851bc084ae4df7a16bf9c475868d9265.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$Test' (although at least one Creator exists): can only instantiate non-static inner class by using default, no-argument constructor at [Source: UNKNOWN; line: -1, column: -1]
Cluster configuration: Databricks Runtime 8.2 (includes Spark 3.1.1, Scala 2.12). Please refer to the screenshot for the complete code.
Workaround (not advisable):
def workaround(map: Map[String, Any]): Test = {
  Test(
    map("k1").asInstanceOf[Int],
    map("k2").asInstanceOf[String],
    map("k3").asInstanceOf[String]
  )
}
val result = workaround(myMap)
Any thoughts on how to resolve this issue?
This smells to me like one of two possibilities.
First, we should double-check that there is no version mismatch between something in your local runtime environment and the Databricks runtime environment. You can check this page for a list of all library versions included in DBR 8.2. In particular, I would check your local environment to make sure you are running the same version of Jackson (2.10.0).
Second, this could be an interaction between how Databricks implements their notebook and a limitation of Jackson. Each command in a Databricks notebook is wrapped in a package object that is given a random name. For example, I can tell from the exception that the package object which holds your Test class definition was called $line23851bc084ae4df7a16bf9c475868d9265.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw. (I say "was" because each time you re-run the command it will generate a new package object.) This implies that all classes (and other kinds of type definitions) are actually path-dependent types when defined inside a notebook. More specifically for this scenario, your Test class is an inner class of the package object. Based on the error message and some brief reading of the docs, I suspect that Jackson cannot construct (deserialize) path-dependent types.

Where can I find the source code of packages/modules?

I would like to look at the code of functions defined in modules, such as Data.List or Data.Map.
I can import the Data.List module with
import Data.List
and then I can use the functions nub or sort.
I would like to know where I can find these functions to look at their code.
In which directory are the modules installed by default?
PS: Windows 8.1; I installed the Haskell Platform.
That directory contains compiled modules, so you wouldn't be able to read the source there.
What you can do is to find your function in online documentation and then click "Source" on the right.
As @arrowd notes in his answer, "That directory contains compiled modules, so you wouldn't be able to read the source there."
The GHC repository (and its GitHub mirror) can be browsed directly, but there is an easier way:
Use Hoogle or Stackage to find the package where the module/function resides
Note that Hoogle and Stackage are case-sensitive. (It's best to look up modules with their capitalized names.)
A query for sort in Hoogle yields a list of results; Stackage's results look slightly different, but the basics are the same (mostly because it uses Hoogle for lookup). The lines in green under each result heading show the name of the containing package (in small caps) and module (capitalized).
There can be multiple functions with the same name, but the module and package name helps to choose the right one.
Click on the function/module name
Click on "#Source"

Error: No implicit view available for String => org.apache.lucene.document.Document

I am trying to use Spark LuceneRDD with the record-linkage approach from the link.
I followed all the steps mentioned in the link, but I am getting the error
Error: No implicit view available for String => org.apache.lucene.document.Document
I tried adding the Lucene jar to the Spark shell, but I am still getting the same error.
Any help is appreciated.
Adding Lucene jars will not help you here. The problem is that this functionality relies on Scala implicits: there needs to be an implicit conversion in scope that turns a String into a Lucene Document.
Looking at the GitHub repository, I found the implicit that performs this conversion: https://github.com/zouzias/spark-lucenerdd/blob/master/src/main/scala/org/zouzias/spark/lucenerdd/package.scala
So you just need to add an import to your code, something like this:
import org.zouzias.spark.lucenerdd._
or, more precisely, if you need only this one conversion (which may not cover your case, however):
import org.zouzias.spark.lucenerdd.stringToDocument

Running Hoogle Locally

I want to run Hoogle on a project of mine. I successfully generated the Hoogle database (a file with a .hoo extension) from my project.
But when I run the server locally, Hoogle cannot find any of the functions or types that are defined in my project.
It can find some of the Prelude functions, such as map, but none of the functions that are defined in my project.
hoogle dump my-project.hoo dumps the content with no error.
I also moved my-project.hoo to ~/.cabal/share/x86_64-osx-ghc-7.8.4/hoogle-4.2.38/databases where all the .hoo files reside. No success again.
The -verbose switch also does not output any useful information.
Any suggestion is appreciated.
Edit:
Thanks to mhuesch's suggestion, I was able to get search results, although the returned results are not linked to the local Hackage documents. Something that I couldn't find anywhere on the web: the hoogle server looks for a file called default.hoo in the current directory.
Edit 2:
If you, like me, have 5000+ databases (i.e., .hoo files), you may get a "too many open files" error when combining them. The trick is simple: run hoogle combine x*.hoo -o=parts/x.hoo for each x = 'a' ... 'z', and then run hoogle combine *.hoo -o=default.hoo in the parts folder.
Edit 3:
If you want to link your Hoogle search results with local Hackage documentation,
use hoogle convert --doc='absolute-path-to-your-doc' your-package-hoogle-doc.txt default.hoo.
I couldn't get a relative path to work.
Hoogle searches the current directory (where the hoogle command is run) for a database called 'default.hoo', so if you rename your database to that, it should find it.
To add it to the database in your cabal directory, I believe this should work (taken from http://newartisans.com/2012/09/running-a-fully-local-hoogle/):
cd {...path to hoogle databases dir...}
mv default.hoo default.hoo-prev
hoogle combine *.hoo
Edit (in response to Oxy's edits):
My knowledge of default.hoo comes from here. It doesn't seem to be very well known.
hoobuddy (the project linked above), while cool, doesn't seem to address what you want. I think the key is in the help for hoogle data:
$ hoogle data --help
...
-l --local[=FILEPATH] Use local documentation if available
...
I haven't done it myself, so I am not sure. The author of this writeup achieved local documentation linking by compiling Hoogle from source and adding his local docs directory. I think that you can avoid that by using hoogle data.

Persistent model types in Fay code

I'm using the Yesod scaffolded site (yesod 1.1.9.2) and spent a few hours yesterday wrapping my head around basic usage of Fay with Yesod. I think I now understand the intended workflow for using Fay to add a chunk of AJAX functionality to a page (I'm going to be a little pedantic here just because someone else might find the step-by-step helpful):
Add a data constructor Example a to SharedTypes.Command.
In the expression case readFromFay Command of ... in Handler.Fay.onCommand, add a case that matches on my new data constructor.
Create a Fay file 'Example.hs' in /fay, patterned after fay/Home.hs. Somewhere in here, use the expression call (Example "foo") $ myFayCallback.
Define a route and handler for the page that will use the Javascript I'm generating. In the handler, use $(fayFile' (ConE 'ScriptR) "Example.hs").
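For concreteness, steps 1 and 2 might look roughly like this in the scaffold; the Example constructor, its String payload, and the surrounding signatures are illustrative sketches and may differ between yesod-fay versions:

-- fay-shared/SharedTypes.hs (step 1): add a constructor to Command
data Command
    = GetFib Int (Returns Int)          -- existing scaffold example
    | Example String (Returns String)   -- hypothetical new command
    deriving (Read, Typeable, Data)

-- Handler/Fay.hs (step 2): handle the new constructor in onCommand
onCommand render command =
    case readFromFay command of
        Just (Example s r) -> render r ("echo: " ++ s)
        Just (GetFib i r)  -> render r (fibs !! i)
        Nothing            -> invalidArgs ["Invalid command"]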
My question: In the current Yesod/Fay architecture, how should I go about sharing my Persistent model types with my Fay code?
Using import Model in a Fay file doesn't work -- when I try to load the page that's using this Fay file, I get an error in the browser (Fay's standard way of alerting me to errors, I guess) indicating that it couldn't find module 'Model' but that it only searched the following directories:
projectroot/cabal-dev//share/fay-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0
projectroot/fay
projectroot/fay-shared
I also tried importing and re-exporting Model in SharedTypes.hs, but that produced the same error.
Is there a way to do this? If not, why not? (I'm a relative noob in both Haskell and Yesod, so the answer to the "why not?" question would be really helpful.)
EDIT:
I just realized that mentioning Persistent in this question's title might be misleading. To be clearer about what I'm trying to do: I just want to be able to represent data in my Fay code using the same datatypes Yesod defines for my models. E.g. if I define a model thusly in config/models...
Foo
    bar BarId
    textThatCanBeNull Text Maybe
    deriving Show
... I want to be able to define an AJAX 'command' that receives and/or returns a value of type Foo and have my Fay code deal in Foos without me having to write any de/serialization code. I understand that I won't be able to use any of Persistent's query functionality directly from my Fay code; I only mentioned Persistent in the title because I mentally associate everything in Model.hs and config/models with Persistent.
This is currently not supported; there are many features leveraged by Persistent which Fay does not support (e.g., Template Haskell). For now, it probably makes sense to have an intermediate data type which is shared by both Fay and Yesod, and to convert your Persistent data to/from that type.
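For example (the names below are hypothetical), a Fay-friendly mirror of the Foo model above and a one-way conversion on the Yesod side might look roughly like this:

-- fay-shared/SharedTypes.hs: a plain type both Fay and GHC can compile
data SharedFoo = SharedFoo
    { sfBar  :: Int          -- the BarId flattened to a plain Int
    , sfText :: Maybe Text
    } deriving (Read, Typeable, Data)

-- Model.hs (or another Yesod-only module): convert before sending to the client
-- keyToInt is a stand-in for whatever key conversion your Persistent version provides
fooToShared :: Foo -> SharedFoo
fooToShared f = SharedFoo
    { sfBar  = keyToInt (fooBar f)
    , sfText = fooTextThatCanBeNull f
    }

The Fay side then deals only in SharedFoo values, and the Handler.Fay code performs the conversion when answering a command.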
