This seems like something that should be easy, but how do I get a pure value out of a query if I am using AcidState's Data.Acid.Memory.Pure module? I guess I can generalize the question to "how do I get any value out of the Update monad?". You see, I'm trying to write a test that does the following run-of-the-mill tasks:
Updates a pure AcidState with an object
Queries that object back out of the state using IxSet
Compares the queried object with the one returned by the Update for equivalence
I need a pure "Bool" from this in order to make integration with test frameworks easy. At first I thought I'd simply use runState from Control.Monad.State but I was mistaken (or just didn't do it right). What should I do?
Since you are using Data.Acid.Memory.Pure, you can use the update, query, and update_ functions from that module (instead of the ones from Data.Acid) to look at the result of an event purely. As with regular, impure acid-state, you don't simply "run" the Update and Query monads; you first have to convert them to events. With Data.Acid.Memory.Pure, that means wrapping them with the constructors of Event.
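A minimal sketch of what that looks like, assuming an acidic state type and events generated by makeAcidic; MyObject, initialDb, AddObject, LookupObject and objectId are placeholders for your own types and events, not names from acid-state:

import qualified Data.Acid.Memory.Pure as Pure

roundTrips :: MyObject -> Bool
roundTrips obj =
    let st0             = Pure.openAcidState initialDb
        -- Pure.update returns the new state paired with the event's result
        -- (check the pair's order against your acid-state version)
        (st1, inserted) = Pure.update st0 (AddObject obj)
        found           = Pure.query st1 (LookupObject (objectId obj))
    in  Just inserted == found

Because everything here is pure, the final Bool can be handed straight to HUnit or any other test framework.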
I am trying to write some simple code in Haskell where a function performs a simple database query. For unit testing I am using HUnit, but I am not sure how to mock the database connection and the query response.
Pass the function that performs the database query as a parameter to your code, instead of having it "hardwired". During tests, pass a mock function to the code. In production, pass a function that really performs the database query.
This new function parameter should be at the right level of abstraction. For example, something like
queryDbForClient :: Connection -> SQLString -> ClientId -> IO (Maybe RowData)
is possibly a bad idea, because it still forces the code under test to be aware that things like Connections and SQL strings exist. Something like this
findClientById :: ClientId -> IO (Maybe Client)
would likely be better, because then you are free to pass functions that don't use db-specific Connection types at all. They could, for example, be backed by an in-memory reference.
Notice that you can build findClientById out of queryDbForClient by partial application and mapping the result a little. But this should be the task of some setup code, not of the main code that you want to test.
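For instance, the setup code might look like this sketch; rowToClient, knownTestClients and the SQL string are placeholders rather than anything prescribed by a library, and the literal assumes SQLString is a string-like type:

-- Production wiring: adapt the low-level query into the shape the code expects.
makeFindClientById :: Connection -> (ClientId -> IO (Maybe Client))
makeFindClientById conn clientId =
    fmap (fmap rowToClient)   -- map over IO, then over Maybe
         (queryDbForClient conn "SELECT * FROM clients WHERE id = ?" clientId)

-- Test wiring: a mock that never touches a database.
mockFindClientById :: ClientId -> IO (Maybe Client)
mockFindClientById clientId = pure (lookup clientId knownTestClients)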
Once you start passing functions in this way for the sake of testability and configurability, some common issues start to appear. For example, what if I have multiple dependencies? Passing them all as positional parameters is a chore. This might lead to passing them together in a record-of-functions, perhaps using Has-like typeclasses so as not to tie your main code to any particular record.
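A sketch of what that can look like; all of the names below are illustrative rather than taken from any particular library:

{-# LANGUAGE MultiParamTypeClasses #-}

-- The dependencies the main code needs, bundled as a record of functions.
data ClientOps m = ClientOps
    { findClientById :: ClientId -> m (Maybe Client)
    , saveClient     :: Client -> m ()
    }

-- A Has-style class: the main code asks for the capability, not for the
-- concrete environment record that happens to carry it.
class HasClientOps env m where
    clientOps :: env -> ClientOps m

greetClient :: (Monad m, HasClientOps env m) => env -> ClientId -> m String
greetClient env cid = do
    mClient <- findClientById (clientOps env) cid
    pure (maybe "unknown client" clientName mClient)  -- clientName :: Client -> String (assumed)

In tests, the environment carries a ClientOps built from in-memory functions; in production, one built from real database calls.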
I'm in the middle of trying to build my first "real" Haskell app, an API using Servant, where I'm using Persistent for the database backend. But I've run into a problem when trying to use Persistent to make a certain type of database query.
My real problem is of course a fair bit more involved, but the essence of the problem I have run up against can be explained like this. I have a record type such as:
data Foo = Foo { a :: Int, b :: Int }
    deriving (Show, Read, Eq)   -- derivePersistField needs Read and Show
derivePersistField "Foo"
which I am including in an Entity like this:
share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Thing
foo Foo
|]
And I need to be able to query for items in my database whose Foo value has its a field greater than some provided aMin. (The intention is that aMin will actually be provided by the API request in a query string.)
For ordinary queries, say for an Int field Bar, I could simply do selectList [ThingBar >=. aMin] [], but I'm drawing a blank as to what to put in the filter list in order to extract the field from the record and compare against it, even though this feels like the sort of thing Haskell should be able to do rather easily. It feels like there should be a Functor involved here that I could just fmap the a accessor over, but the relevant type, as far as I can tell from the documentation and the tutorial, is EntityField Thing, defined by a GADT (actually generated by Template Haskell from the share call above). In this case that GADT has just one constructor, yielding an EntityField Thing Foo, and it doesn't seem possible to make a Functor instance out of that.
Without that, I don't see how to proceed, since the left-hand side of a combinator like >=. has to be an EntityField value, which stops me from applying any function to the database value before comparing.
Since I know someone is going to say it (and most of you will be thinking it) - yes, in this toy example I could just as easily make the a and b into separate fields in my database table, and solve the problem that way. As I said, this is somewhat simplified, and in my real application doing it that way would feel unsatisfactory for a number of reasons. And it doesn't solve my wider question of, essentially, how to be able to do arbitrary transformations on the data before querying. [Edit: I have now gone with this approach, in order to move forward with my project, but as I said it's not totally satisfactory, and I'm still waiting for an answer to my general question - even if that's just "sorry it's not possible", as I increasingly suspect, I would appreciate a good explanation.]
I'm beginning to suspect this may simply be impossible, because the data is ultimately stored in an SQL database and SQL isn't as expressive as Haskell. But what I'm trying to do, at least with record types (I confess I don't know how derivePersistField marshals these to SQL data types), doesn't seem too unreasonable, so I feel I should ask: are there any workarounds, or do I really have to decompose my records into a bunch of separate fields if I want to query them individually?
[If there are any other libraries which can help then feel free to recommend them - I did look into Esqueleto but decided I didn't need it for this project, although that was before I ran into this problem. Would that be something that could help with this kind of query?]
You can use the -ddump-splices compiler flag to dump the code being generated by derivePersistField (and all the other Template Haskell calls). You may need to pass -fforce-recomp, too, if ghc doesn't think the file needs to be recompiled.
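If you would rather not change the build flags globally, the same flags can be enabled per module with a pragma, for example:

{-# OPTIONS_GHC -ddump-splices -fforce-recomp #-}
-- Placing the flags in the module containing the Template Haskell calls keeps
-- the (very verbose) splice dump limited to that one module.
module Model where  -- hypothetical module name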
If you do, you'll see that the method persistent uses to marshal the Foo datatype to and from SQL is to use its read and show instances to store it as a text string. (This is actually explained in the documentation on Custom Fields.) This also means that a query like:
stuff <- selectList [ThingFoo >=. Foo 3 0] []
actually does string comparison at the SQL level, so Thing (Foo 10 2) wouldn't pass through this filter, because the stored string for Foo 10 2 sorts lexicographically before the one for Foo 3 0 (the character '1' comes before '3').
In other words, you're pretty much out of luck here. Custom fields created by derivePersistField aren't really meant to be used for anything more sophisticated than the example from the Yesod documentation:
data Employment = Employed | Unemployed | Retired
The only way you could examine their structure would be to pass in raw SQL that parses the string field for use in a query. That would be much uglier than whatever you're doing now, and presumably no more efficient than querying for all records at the SQL level and filtering the result list in plain Haskell, as sketched below.
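For completeness, that fallback might look something like this sketch, assuming the Thing entity from the question with its generated thingFoo accessor:

-- Fetch every Thing and filter client-side; no SQL-level filter on the `a`
-- field is possible, so this pays the cost of transferring all rows.
thingsWithLargeA :: MonadIO m => Int -> SqlPersistT m [Entity Thing]
thingsWithLargeA aMin = do
    allThings <- selectList [] []
    pure (filter ((>= aMin) . a . thingFoo . entityVal) allThings)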
How can I get esqueleto to generate an SQL string from a from statement?
The documentation of toRawSql says that "you may just turn on query logging of persistent". I tried every form of MonadLogger that I could understand, but it never printed any SQL. The same documentation also says that "manually using this function ... is possible but tedious". However, no constructors of QueryType, nor any functions returning values of that type, are exported. I managed to get around this by noticing that QueryType is a newtype and using unsafeCoerce!
I was also forced to provide a Connection (which I got via SQLite) even though there should be no need to connect to a database to generate the SQL.
This is what I've got. There must be a better way.
withSqliteConn ":memory:" $ \conn ->
    return $ toRawSql SELECT
        (unsafeCoerce ((const mempty) :: a -> Text.Lazy.Builder.Builder))
        (conn, initialIdentState)
        myFromStatement
http://hackage.haskell.org/package/esqueleto-1.3.4.2/docs/Database-Esqueleto-Internal-Sql.html
In the time since this question was posted, esqueleto has gone through a number of major revisions. As of version 2.1.2, and several earlier versions, the QueryType a parameter that necessitated your unsafeCoerce has been removed from toRawSql; that major wart is no longer necessary.
As currently implemented, a Connection is still required. I believe that, as the type synonym name IdentInfo suggests, esqueleto uses it to build identifiers in the query; it may, for example, add the database name. I haven't plumbed the source in enough depth to be sure. Suffice it to say, passing a fake connection (i.e. undefined) doesn't work, and I don't know whether a mock connection could be implemented. Your solution seems workable.
The rest of your solution should work fine. Since toRawSql is explicitly an internal function, the API here seems reasonable. Although others note that it "should" be possible to generate a connection-neutral string, that appears outside the scope of toRawSql.
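Going by that, the call in recent versions reduces to something like the sketch below; the SELECT mode and the (connection, initialIdentState) pair are carried over from your snippet, but the exact shape should be checked against the version you are on:

-- No builder/QueryType argument any more: just the mode, the identifier
-- info (still built from a real connection), and the query.
renderSql conn = toRawSql SELECT (conn, initialIdentState) myFromStatement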
You mention that you couldn't use MonadLogger as recommended. What did you try, and what happened?
I am trying to get my head around compiled splices. With previous help I can compile and render some useful results. I don't fully understand the way it works, though.
In interpreted mode, the algorithm is simple: construct the root, call the handler function for the mapped URL, pull data from the DB, construct and bind splices out of the pulled data, insert them into Heist, and call the appropriate template.
It is all upside down in compiled mode. I map the URL directly to cRender and don't call a handler, so I assume all the splice-constructing and data-processing functions are called at load time.
So my question is when is the database called? Does this happen at load time too?
It is just the sequence of events that I don't understand.
Since splice construction is independent of any particular template rendering, does this mean the splice binding tags are unique across the whole application? Are they like global variables?
Thanks
Yes, you are pretty much correct, although I wouldn't say they are like global variables. They are more like global constants, or a global API. I view compiled splices as an API that your web designer can use to interact with dynamic data.
Compiled splices allow you to insert holes into your markup that get filled with data at runtime. At load time the running monad is HeistT n IO. But at run time the running monad is RuntimeSplice n. So if you're looking at the compiled Heist API, it's very easy to see where the runtime code like database functions need to be: in the RuntimeSplice n monad.
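As a sketch of where that puts a database call (the names below are illustrative; AppHandler stands for whatever your runtime monad n is, and fetchUserNameFromDb is a hypothetical action in it):

import qualified Heist.Compiled as C
import           Control.Monad.Trans (lift)

-- Built once at load time, but the lifted database action runs on every render.
userNameSplice :: C.Splice AppHandler
userNameSplice = C.pureSplice (C.textSplice id) (lift fetchUserNameFromDb)

So the answer to "when is the database called?" is: at render time, each time the template is served, not at load time.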
I have an IndexedTraversal (from the Control.Lens package) and I would like to apply an index-aware monadic action to each element in it. Unfortunately, all of the convenient ways I see of doing something like this, such as ^! combined with the act function, seem to ignore the index of each element. Is there a nice way to run an action for every element (and its index) in an indexed traversal?
Does imapMOf work? You would use it as imapMOf someIndexedTraversal actionWithIndex dataStructure I think.
If you just need to perform an action, there's also imapMOf_ in Control.Lens.Fold.
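A small concrete illustration with a Map, whose index under itraversed is the key:

import           Control.Lens
import qualified Data.Map as Map

-- Print each element together with its key, keeping the structure...
printWithKeys :: Map.Map String Int -> IO (Map.Map String Int)
printWithKeys = imapMOf itraversed (\k v -> print (k, v) >> pure v)

-- ...or discard the results entirely with the fold variant.
printWithKeys_ :: Map.Map String Int -> IO ()
printWithKeys_ = imapMOf_ itraversed (\k v -> print (k, v))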
I haven't used indexed traversals much, and I find the API a bit confusing. Most of the time I use lenses with either ^. or ^!, but for indexed traversals the usual way seems to be one of the special index-aware functions, which feels a bit different.