A project that I'm currently working on makes extensive use of persistent. Instead of persistent's quasi-quoted syntax for specifying models, I would like to use JSON. Right now, I use a script built on simple-templates to generate the quasiquote that persistent expects, which adds a rather awkward step to the workflow. Can this be avoided using Template Haskell?
This is what the script currently generates:
-- File : ProjSpecific.Models
share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    [persistLowerCase|
Person
    name String
    age Int Maybe
    deriving Show
BlogPost
    title String
    authorId PersonId
    deriving Show
|]
This is how I would ideally like to do it:
-- File : ProjSpecific.Config
import Data.Aeson.QQ
import Data.Aeson (Value)
models :: Value
models = [aesonQQ| {some json encoding of above models} |]
And
-- File : ProjSpecific.Models
compile-time logic to generate the persistent models
Any ideas on how this can be done, or is there a better way to accomplish what I'm trying to do?
Yes, it should be relatively painless. You'd essentially want to use the quoteExp field from persistLowerCase, which will give you a function of type String -> Q Exp. Use your preprocessor to convert JSON into the expected syntax, and then pass it to the function.
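For concreteness, here is a minimal sketch of that suggestion. jsonToPersistSyntax is a hypothetical function that does the same rendering your script does today, and it has to live alongside models in an already-compiled module (such as ProjSpecific.Config) because of the Template Haskell stage restriction:
-- File : ProjSpecific.Models
{-# LANGUAGE TemplateHaskell #-}
-- ...plus whatever extensions your current Models module already enables
module ProjSpecific.Models where

import Database.Persist.TH
import Language.Haskell.TH.Quote (quoteExp)
-- jsonToPersistSyntax :: Value -> String is assumed; it renders the JSON
-- description into the whitespace syntax persistLowerCase expects.
import ProjSpecific.Config (models, jsonToPersistSyntax)

-- quoteExp persistLowerCase :: String -> Q Exp, so we render the JSON and
-- splice the result in place of the usual quasiquote.
share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    $(quoteExp persistLowerCase (jsonToPersistSyntax models))
This is exactly what the quasiquote notation desugars to, so the rest of your code (migrations, queries) should not need to change.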
I'm thinking of developing an app to query our InfluxDB servers. I've looked at the influxdb library docs (https://hackage.haskell.org/package/influxdb-1.2.2/docs/Database-InfluxDB.html), but as far as I understand it, you need to pre-define some data structure or you can't query anything.
I just need to be able to let the user query whatever without having to define some data in the sources first.
I imagine I could define something with a time field and a value field, then use something like "SELECT active as value FROM mem" to force it to fit that. I think that would work, but it wouldn't be very practical if I need to query two fields later on.
Any better solutions I'm not seeing? I'm still very much a beginner in Haskell, so I'd appreciate any tips.
EDIT:
Even that doesn't work, since apparently the "String" constructor is missing in this bit:
:{
data Test = Test { time :: UTCTime, value :: T.Text }
instance QueryResults Test where
  parseResults prec = parseResultsWith $ \_ _ columns fields -> do
    time <- getField "time" columns fields >>= parseUTCTime prec
    String value <- getField "value" columns fields
    return Test {..}
:}
I copied that from the docs and just changed the fields; I'm not sure where the "String" is supposed to be declared.
I don't know what an "InfluxDB" is, but I can read Haskell type signatures, so maybe we can start with something like this:
import Data.Aeson
import qualified Data.Aeson.Types as A
import qualified Data.Vector as V

newtype GenericRawQueryResults = GenericRawQueryResults Value
  deriving Show

instance FromJSON GenericRawQueryResults where
  parseJSON = pure . GenericRawQueryResults

instance ToJSON GenericRawQueryResults where
  toJSON (GenericRawQueryResults v) = v

instance QueryResults GenericRawQueryResults where
  parseResults _ val = pure (V.singleton (GenericRawQueryResults val))
Then try your queries.
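For example, something along these lines ought to dump raw results (a sketch only: it assumes OverloadedStrings plus the query and queryParams functions from Database.InfluxDB, and "mydb" and the query text are placeholders):
{-# LANGUAGE OverloadedStrings #-}
import Database.InfluxDB (query, queryParams)
import qualified Data.Vector as V

main :: IO ()
main = do
  -- Ask for a Vector of the raw wrapper instead of a hand-written record type.
  rows <- query (queryParams "mydb") "SELECT * FROM mem"
            :: IO (V.Vector GenericRawQueryResults)
  V.mapM_ print rows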
From what I can guess by reading the library's code, the results of an InfluxDB query arrive at parseResults as a JSON object with a "results" key that points to an array of objects, each of which has either a "series" key or an "error" key. The assumption is that you write a parser to turn each element pointed to by "series" into whatever type you want to read from the db.
Except that there's even more framework going on there that I'd probably understand if I knew more about what an InfluxDB is and what kind of data it returns.
In any case, this should get you as close to the raw results as the library will let you get. One additional thing you might do is install the aeson-pretty package, remove the deriving Show bit and do:
import Data.Aeson.Encode.Pretty (encodePretty)

instance Show GenericRawQueryResults where
  show g = "decode " ++ show (encodePretty g)
(Or you can keep the derived Show instance and just apply encodePretty to your query results to display them)
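For instance, keeping the derived Show instance, a pretty-printing helper over the query sketch above might look like this (encodePretty is from Data.Aeson.Encode.Pretty in aeson-pretty; names are reused from the snippets above):
import Data.Aeson.Encode.Pretty (encodePretty)
import qualified Data.ByteString.Lazy.Char8 as BL8
import qualified Data.Vector as V

prettyDump :: V.Vector GenericRawQueryResults -> IO ()
prettyDump = V.mapM_ (BL8.putStrLn . encodePretty)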
Let's suppose that I have a persistent type and want to project some value from this type:
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
User
    name Text
    email Text
|]
...
getName :: Entity User -> Text
getName (Entity uid vals) = userName vals
The problem is, if I generate lenses for this type using mkPersist sqlSettings {mpsGenerateLenses = True}, I'll need to add an underscore at the beginning of each projection function or use the lens getter:
getName :: Entity User -> Text
getName (Entity uid vals) = _userName vals

getName' :: Entity User -> Text
getName' (Entity uid vals) = vals ^. userName
Firstly, how can I revert that to the default, userName vals, and add the underscore to use the lens getter, vals ^. _userName?
Secondly, why is this this way and not the other way around?
Firstly, how can I revert that to the default, userName vals, and add the underscore to use the lens getter, vals ^. _userName?
Database.Persist.TH does not offer that option (to see what it might look like if it existed, cf. Control.Lens.TH), so, assuming that you won't fork the library over this, there doesn't seem to be a way. (By the way, looking for mpsGenerateLenses in the source will show exactly where the underscores are added.)
Secondly, why is this this way and not the other way around?
Presumably because the library assumes that if you generate the lenses you will use them everywhere instead of the record accessors/labels, including for getting the value of the field. The only cosmetic suggestion I have is that, if the change of writing order from _userName vals to vals ^. userName bothers you, you might prefer using view rather than (^.), as in view userName vals.
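To make that last suggestion concrete, here is a small sketch with the lens-generating settings (view and (^.) come from Control.Lens; the entity is the User from the question, and the usual persistent model extensions are assumed):
-- assumes the usual persistent extensions (TemplateHaskell, QuasiQuotes, TypeFamilies, GADTs, ...)
import Control.Lens (view, (^.))
import Data.Text (Text)
import Database.Persist (Entity(..))
import Database.Persist.TH

share [mkPersist sqlSettings { mpsGenerateLenses = True }, mkMigrate "migrateAll"]
    [persistLowerCase|
User
    name Text
    email Text
|]

-- The record selector is now _userName and userName is the generated lens;
-- the three bodies below are equivalent.
getName, getName', getName'' :: Entity User -> Text
getName   (Entity _ vals) = _userName vals
getName'  (Entity _ vals) = vals ^. userName
getName'' (Entity _ vals) = view userName vals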
I have many "data" declarations which represent SQL tables in a database.
data User = User { userId :: Int, userName :: String }
data Article = Article { articleId :: Int, articleTitle :: String, articleBody :: String }
-- .......
All of them have the field "id" as a primary key. I wonder, is there any way to get rid of the necessity to define it each time for each "data"? Can I somehow simplify that? If I do this:
class DataTable a where
  myId :: Int
it won't change anything, will it? I'll still have to define "id" for each data type and then implement it for DataTable; in fact, it'll make things more complex.
Sometimes it's best to separate things.
data Identified a = Identified
  { ident   :: !Int
  , payload :: a
  }
Now you can deal with identified things in an entirely uniform way.
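For example (the row types here are just illustrations of the idea; the id no longer lives in the table-specific types):
data User = User { userName :: String }
data Article = Article { articleTitle :: String, articleBody :: String }

type UserRow    = Identified User
type ArticleRow = Identified Article

-- One function now works for every table:
rowId :: Identified a -> Int
rowId = ident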
In GHC 8.0 with DuplicateRecordFields you can use the same record field name for many data types. This is part of the larger feature of OverloadedRecordFields.
Part 3, MagicClasses, introduces derivable type classes for HasField and UpdateField. This is similar in idea to your DataTable.
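A quick sketch of what DuplicateRecordFields allows (GHC 8.0 or later; using such a field at a use site may still need disambiguation):
{-# LANGUAGE DuplicateRecordFields #-}

-- The field is called recordId here to avoid shadowing Prelude.id.
data User    = User    { recordId :: Int, userName :: String }
data Article = Article { recordId :: Int, articleTitle :: String, articleBody :: String }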
If you want a field in your data type then you must declare it in the data type definition. I am not aware of any extensions, other than Template Haskell, which change this.
I've just started off with Persistent from Yesod and have already hit my first roadblock.
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
User
    email String
    createdAt UTCTime Maybe default=CURRENT_TIME
    updatedAt UTCTime Maybe default=CURRENT_TIME
    deriving Show
|]
u <- insert $ User "saurabhnanda@gmail.com" Nothing Nothing
I'm coming from a Rails background and like the schema design conventions it advocates, in this particular case having every table carry a created_at and updated_at timestamp. However, is there any way to NOT specify the createdAt and updatedAt fields for every object that will be created?
I don't know much about Persistent, but the usual technique for this kind of thing is to define some parameterized types. For example, you might write
import Data.Time.Clock (UTCTime, getCurrentTime)

data Dated a = Dated
  { created :: UTCTime
  , updated :: UTCTime
  , value   :: a
  }
Then your bare values need not worry about their metadata, and metadata can be handled uniformly across all different types. e.g. you might expose an API like this:
new :: a -> IO (Dated a)
new a = do
  now <- getCurrentTime
  return (Dated now now a)

update :: Dated a -> a -> IO (Dated a)
update old new = do
  now <- getCurrentTime
  return (old { updated = now, value = new })
Then I would suggest exporting the Dated type constructor and all its fields, together with new and update, but not exporting the Dated data constructor. This will ensure that the created and updated fields are maintained correctly.
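A brief usage sketch (Profile is a hypothetical payload type standing in for your persistent record):
data Profile = Profile { email :: String }

demo :: IO ()
demo = do
  row  <- new (Profile "someone@example.com")      -- created == updated == now
  row' <- update row (Profile "other@example.com") -- only updated changes
  print (created row', updated row')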
I have fairly deep URLs with IDs, and I want to see if I can convert them into something nicer looking. I tried looking into how slugs are done for the Yesod blog example (https://github.com/yesodweb/yesod/wiki/Slugs), but I'm not sure how to translate that to what I'm looking for here.
Let's say I want to display Top Fiction Books; I have a resource that looks like this:
/topbooks/bookcategory/#BookCategoryId
If I go to /topbooks/bookcategory/1 I may get fiction books; if I go to /topbooks/bookcategory/2 I may get non-fiction, etc.
All my handlers use the #BookCategoryId input parameter in the database queries to get the appropriate records.
Ideally I would like to create URLs that look like /topbooks/fiction, /topbooks/non-fiction, etc. If I create my route as /topbooks/#Text, I can pattern match the string and return a Key back. However, I will have to manually transform it in every handler using #BookCategoryId. Note that the IDs are used as foreign keys, so it is a bit cumbersome to rely on getBy like in the Slug example.
So I am wondering if there is a better way to do it: is it possible to define a custom type similar to Slug that, instead of just converting values to/from Text/String, actually outputs IDs? That way I can just use the parameter directly in my queries.
Update:
To clarify given Michael's comment:
I understand we cannot get the IDs without doing a database lookup. In fact for this example, I am ok hard coding the look-up mechanism. I was just trying to see if the PathPiece mechanism will somehow simplify the conversion process.
For example, if something like this worked then it would be fine, but of course I get a type error since I am trying to return a Key when the compiler is expecting BookCategories.
data BookCategories = FICTION | NONFICTION

instance PathPiece BookCategories where
    toPathPiece FICTION    = T.pack "fiction"
    toPathPiece NONFICTION = T.pack "nonfiction"
    fromPathPiece s =
        let ups = map toUpper $ T.unpack s
        in case reads ups of
            [(FICTION, "")]    -> Just $ Key $ PersistInt64 1
            [(NONFICTION, "")] -> Just $ Key $ PersistInt64 2
            []                 -> Nothing
            _                  -> Nothing
Of course I could just return Just FICTION and unwrap it in my handler. This is not conceptually very different from actually pattern matching on Text directly with a function with a signature Text -> BookCategoryId.
getBookCategoryR :: BookCategoryId -> Handler Html
getBookCategoryR bcId = do
    -- Normal use case when IDs are used in the URL
    books <- runDB $ selectList [ModelBookCategory ==. bcId] []
If I switch to Text input:
getBookCategoryR :: Text -> Handler Html
getBookCategoryR bc = do
    let bcId = convertToId bc -- This is the line I am trying to avoid everywhere
    books <- runDB $ selectList [ModelBookCategory ==. bcId] []
The one-line conversion is what I am trying to avoid; PathPiece has been handling it nicely for ID-based URLs and has kept the code clean. If there were a way to get IDs returned through some type magic, that would be great. With my limited knowledge of Haskell, I have no idea if it is even feasible.
Hope my question is clearer now.
Thanks!
No, there's no such way to do that, and the reason is simple: without consulting the database, there's no way to know if foo exists as a slug at all and, if it does, which ID it relates to. You'll always have to perform some database action to convert a slug into an ID.
UPDATE: I'm still not certain I understand what you're looking for, but the short answer regarding PathPiece is that it only works on pure conversions, nothing which has side effects. If you're looking to write a function like Text -> Handler BookCategoryId, you can certainly do so. And if you really wanted to, you could even abstract this with a typeclass, though I'm not sure if you'll gain anything.
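As a sketch of that Text -> Handler BookCategoryId idea with the hard-coded mapping you said is acceptable (toSqlKey is from Database.Persist.Sql in persistent 2.x; OverloadedStrings is assumed for the Text patterns):
lookupCategoryId :: Text -> Handler BookCategoryId
lookupCategoryId "fiction"    = return (toSqlKey 1)
lookupCategoryId "nonfiction" = return (toSqlKey 2)
lookupCategoryId _            = notFound
Each handler would then start with bcId <- lookupCategoryId bc, so the mapping lives in one place even though the conversion itself cannot be pushed into PathPiece.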
This may be barking up the wrong tree, but here's a short idea that might inspire you a bit: you could create different newtype wrappers for each textual slug field, and then create a typeclass to convert a textual slug field into the appropriate entity, e.g.:
newtype BookCatSlug = BookCatSlug Text
    deriving PathPiece

BookCategory
    slug BookCatSlug
    title Text
    ...
    UniqueBookCat slug

class Slug slug where
    type SlugEntity slug
    lookupSlug :: slug -> YesodDB App (Maybe (Entity (SlugEntity slug)))

instance Slug BookCatSlug where
    type SlugEntity BookCatSlug = BookCategory
    lookupSlug = getBy . UniqueBookCat

lookupSlug404 slug = runDB (lookupSlug slug) >>= maybe notFound return

myHandler slug = do
    Entity bookCatId bookCat <- lookupSlug404 slug
Something along these lines should work, but I'm not sure if the "type magic" is worthwhile, since having a helper function and manually passing in the appropriate Unique constructor would be almost as easy for the call site and result in much simpler error messages.
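For comparison, the simpler helper described in that last sentence could look roughly like this (a sketch reusing the names from the snippet above, with the signature omitted just as it is there):
lookupUnique404 mkUnique slug = runDB (getBy (mkUnique slug)) >>= maybe notFound return
A handler would then bind Entity bookCatId bookCat <- lookupUnique404 UniqueBookCat slug, so the only thing the type-family version saves at the call site is naming the Unique constructor.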