How can I access a Hasql.Pool.Pool inside a BasicAuthCheck User? - servant

I usually pass a Hasql.Pool.Pool as an argument from main :: IO (), through run, to the serve function.
But since I started planning to use a BasicAuthCheck User for authentication, I have needed the Hasql.Pool.Pool inside the BasicAuthCheck User, which wraps a BasicAuthData -> IO (BasicAuthResult User).
My current workaround is to create another Hasql.Pool.Pool inside the BasicAuthData -> IO (BasicAuthResult User) and use it to authenticate users.
Is there a more elegant way than creating another connection pool?
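One option, sketched below, is to close over the pool created in main when building the check, and supply the check through servant's Context via serveWithContext. In this sketch lookupUser (a Hasql session you would write yourself), poolSettings, api, and server are hypothetical names, not part of the question:

authCheck :: Hasql.Pool.Pool -> BasicAuthCheck User
authCheck pool = BasicAuthCheck $ \(BasicAuthData name password) -> do
  -- run the authentication query against the shared pool
  result <- Hasql.Pool.use pool (lookupUser name password)
  case result of
    Right (Just user) -> return (Authorized user)
    Right Nothing     -> return NoSuchUser
    Left _            -> return Unauthorized  -- pool/connection error

main :: IO ()
main = do
  pool <- Hasql.Pool.acquire poolSettings  -- poolSettings elided
  run 8080 $ serveWithContext api (authCheck pool :. EmptyContext) server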

Related

Should I use ReaderT to pass a database connection pool around in Servant?

I am building a web API with Servant and Persistent. I plan to define some API endpoints (about 15) that use a connection pool to access the DB.
For example, one of the endpoint definitions (Handlers) is:
getUser :: ConnectionPool -> Int -> Handler User
getUser pool uid = do
  user <- inPool pool $ get (toId #User uid)
  user & orErr err404 {errBody = "This user does not exist."}
where inPool is just a lifted withResource function, and orErr is a lifted fromMaybe.
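A possible reading of those helpers (hypothetical definitions; the question only describes them in prose):

-- inPool: run a persistent action on a connection taken from the pool
inPool :: MonadIO m => ConnectionPool -> SqlPersistT IO a -> m a
inPool pool act = liftIO $ withResource pool (runSqlConn act)

-- orErr: turn a Maybe into a Handler, throwing the given error on Nothing
orErr :: ServantErr -> Maybe a -> Handler a
orErr err = maybe (throwError err) pure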
Then, a higher level API definition (Servers) looks like this:
type Point (s :: Symbol) (a :: *) =
  s :> Capture "id" Int :>
    ( Get '[JSON] a
      :<|> ReqBody '[JSON] a :> Post '[JSON] NoContent
    )
type UserPoint = Point "users" User
userServer :: ConnectionPool -> Server UserPoint
userServer pool uid =
  getUser pool uid :<|>
  postUser pool uid
And I defined the main to be:
main = runStdoutLoggingT . withPostgresqlPool connectionString numConnections $ \pool -> do
  withResource pool (runSqlConn $ runMigration migrateAll)
  liftIO $ run appPort (serve (Proxy :: Proxy UserPoint) (userServer pool))
But I soon noticed that I would have to pass the pool down layer by layer (in the example above there are 2 layers, and in my real project there are 3), to every function (that is, over 20 of them). My intuition tells me this is a code smell, but I am not quite sure.
Then I thought of ReaderT, because it might abstract the pool away. But my concern is that introducing ReaderT may lead to unnecessary complexity:
I need to lift many things manually;
The mental model of types will become more complicated thus harder to think about;
It means I'll have to give up the Handler type, which makes using Servant harder too.
I am not sure whether I should use ReaderT in this case. Please offer some suggestions (I'll be grateful if you could also provide some guidelines about when to use ReaderT or even other monad transformers).
UPDATE: I found that I can use where-clauses to simplify this a lot, and this basically solves my problem, but I'm not sure whether it is best practice, so I'm still looking forward to an answer.
userServer :: Pooled (Server UserPoint)
userServer pool auth = c :<|> rud where
  c :: UserCreation -> Handler NoContent
  c = undefined
  rud uid = r :<|> u :<|> d where
    r :: Handler User
    r = do
      checkAuth pool auth
      user <- inPool pool $ get (toId #User uid)
      user & orErr err404 {errBody = "This user does not exist."}
    u :: User -> Handler NoContent
    u = undefined
    d :: Handler NoContent
    d = undefined
While defining your handlers along with your server spares you the parameter-passing, as the server grows in complexity you might want to define some handlers separately:
Perhaps some handler provides generic functionality and could be useful in other servers.
Defining everything together means everything is aware of everything else. Moving handlers to the top level, or even to another module, makes explicit which parts of the whole they really need to know about, and that can make each handler easier to understand.
Once we separate a handler, supplying it with the environment will become necessary. This can be done with plain parameters to functions, or with a ReaderT. As the number of parameters grows, the ReaderT (often in combination with auxiliary HasX typeclasses) becomes more attractive because it frees you from having to care about parameter order.
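For instance, here is a minimal sketch of the HasX idea (all names are hypothetical; ConnectionPool is persistent's, as in the question):

import Control.Monad.Reader (MonadReader, asks)

-- the capability: any environment that can supply a pool
class HasPool env where
  poolOf :: env -> ConnectionPool

-- one concrete environment; handlers need not know its full shape
data Env = Env { envPool :: ConnectionPool, envPort :: Int }

instance HasPool Env where
  poolOf = envPool

-- a handler states only the capability it needs, and parameter order
-- never comes up
askPool :: (MonadReader env m, HasPool env) => m ConnectionPool
askPool = asks poolOf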
I would have to pass the pool down layer by layer (In the example above there are 2 layers, and in my real project there are 3), to every function
Besides the extra (possibly inevitable) burden of having to pass parameters, I think there's a potentially worse problem lurking: you are threading a low-level detail (the connection pool) through several layers of functions. This can be bad because:
You are committing your whole application to using an actual database. What happens if, during testing, you want switch it with some kind of in-memory repository?
If you need to change the way you do persistence, the refactor will reverberate through all the layers of your application, instead of remaining localized.
One possible solution for these problems: the functions at layer N+1 should not receive as parameter the connection pool, but rather the functions they use from layer N. And those functions from layer N will already have been partially applied with the connection pool.
A trivial example: if you have some high-level logic transferUser :: Conn -> Handle -> IO () that includes hardwired calls to functions readUserFromDb :: Conn -> IO User and writeUserToFile :: Handle -> User -> IO (), change it into a transferUser :: IO User -> (User -> IO ()) -> IO ().
Notice that the auxiliary functions from level N could be stored in the ReaderT context; the functions from level N+1 could get them from there.
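A runnable toy version of that example (the User type and the commented-out composition root are stand-ins, not from the original):

data User = User { userName :: String }

-- the high-level logic no longer mentions Conn or Handle at all
transferUser :: IO User -> (User -> IO ()) -> IO ()
transferUser readUser writeUser = readUser >>= writeUser

-- at the composition root the low-level resources are applied once:
-- main = transferUser (readUserFromDb conn) (writeUserToFile handle)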
It means I'll have to give up the Handler type, which makes using Servant harder too.
You can define your server using a ReaderT transformer over Handler, and then pass it to the hoistServer function which will "whittle it down" to a runnable server:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}
import Servant
import Servant.API
import Control.Monad.Trans.Reader

type UserAPI1 = "users" :> Capture "foo" Int :> Get '[JSON] Int

data Env = Env

-- also valid type:
-- server1 :: Int -> ReaderT Env Handler Int
server1 :: ServerT UserAPI1 (ReaderT Env Handler)
server1 = \param -> do
  _ <- ask
  return param

-- also valid types:
-- server2 :: ServerT UserAPI1 Handler
-- server2 :: Int -> Handler Int
server2 :: Server UserAPI1
server2 = hoistServer (Proxy :: Proxy UserAPI1) (flip runReaderT Env) server1
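To actually serve the whittled-down server (not part of the original answer), something like the following should work, assuming Warp:

-- needs Network.Wai.Handler.Warp (run)
main :: IO ()
main = run 8080 (serve (Proxy :: Proxy UserAPI1) server2)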

Combining Persistent's SqlPersistT with Servant's Handler

I'm trying to define a helper function that will allow me to run queries using a connection pool provided when run in the Reader monad. I would like to define a function that will abstract away the asks part of fetching the pool, like so:
data Config = Config { getPool :: ConnectionPool }
type App = ReaderT Config

runQuery :: MonadBaseControl IO m => SqlPersistT m a -> App m a
runQuery query =
  asks getPool >>= lift . runSqlQuery query
Though I am having trouble getting this to compile... I could have sworn this code had worked in a previous project, and lo and behold, it had. When I went to look at the definitions of Handler (the m I want in App m) and runSqlQuery, they differ: Handler has an instance of MonadBaseControl IO, whereas runSqlQuery now requires MonadUnliftIO. However, looking back at the version before MonadUnliftIO was added, runSqlQuery used MonadBaseControl IO m... which I presume is why my previous code compiled.
So I suppose my question is: how can I change my implementation of runQuery to support the fact that runSqlQuery is now based on MonadUnliftIO? I had a look through the functions MonadUnliftIO provides (namely unliftIO), but I have been unable to figure out a way to make the types fit.
It seems this may be an intentional design decision of MonadUnliftIO: the readme at https://www.stackage.org/package/unliftio states under 'Limitations' that it does not support "transformers with multiple exit points (e.g., ExceptT)", and Servant's Handler is a wrapper around ExceptT. I may just need to find another way of writing this code.
This compiles with persistent-2.8.2:
runQuery :: MonadUnliftIO m => ReaderT SqlBackend m b -> App m b
runQuery query =
  asks getPool >>= lift . runSqlPool query

Haskell: carry out an IO action wrapped in a Data.Dynamic

Suppose I have a Data.Dynamic.Dynamic object which wraps an IO action (that is, something of type IO a for some perhaps-unknown a). I feel like I should be able to carry out this IO action and get its result, wrapped in a new Dynamic (which will then wrap a value of type a). Is there a standard library function which does this? (Something like dynApply, but for performing IO actions instead of function application.)
The implementation of the function would perhaps look something like
dynPerform :: Dynamic -> Maybe (IO Dynamic)
dynPerform (Dynamic typ act) =
  if typeRepTyCon typ /= ioTyCon
    then Nothing
    else Just $ do
      result <- (unsafeCoerce act :: IO Any)
      return $ Dynamic (head $ typeRepArgs typ) result

exampleIOAction = putChar 'x'
typeOfIOAction = typeOf exampleIOAction
ioTyCon = typeRepTyCon typeOfIOAction
but obviously this uses several unsafe operations, so I'd rather pull it in from a library. (In fact, what I've written wouldn't work outside Data.Dynamic because of the opacity of the type Data.Dynamic.Dynamic.)
I don't believe you can safely do what you are trying to do. Let me suggest an alternative approach.
Perhaps phantom types can help you here. Suppose you are providing some sort of cron job service, where the user has you perform an action every x microseconds, and the user can query at any time to see the result of the most recent run of that action.
Suppose you yourself have access to the following primitives:
freshKey :: IO Key
save :: Key -> Dynamic -> IO ()
load :: Key -> IO (Maybe Dynamic)
You should schedule the jobs and make a plan to store the results while you still "know" in the type system what type the action is.
-- do not export the internals of PhantomKey
data PhantomKey a = PhantomKey
  { getKey    :: Key
  , getThread :: Async ()
  }
-- This is how your user acquires phantom keys;
-- their phantom type is tied to the type of the input action
schedule :: Typeable a => Int -> IO a -> IO (PhantomKey a)
schedule microseconds m = do
  k <- freshKey
  let go = do
        threadDelay microseconds
        a <- m
        save k (toDyn a)
        go
  thread <- async go
  return $ PhantomKey k thread
unschedule :: PhantomKey a -> IO ()
unschedule pk = cancel (getThread pk)
-- This is how your user uses phantom keys;
-- notice the function result type is tied to the phantom key type
peekLatest :: Typeable a => PhantomKey a -> IO (Maybe a)  -- Typeable needed for fromDynamic
peekLatest pk = load (getKey pk) >>= \md -> case md of
  Nothing -> return Nothing -- nothing stored at this key (yet?)
  Just dyn -> case fromDynamic dyn of
    Nothing -> return Nothing -- mismatched data type stored at this key;
                              -- hitting this branch is probably a bug
    Just a -> return (Just a)
Now if I'm a user of your API, I can use it with my own data types that you know nothing about, as long as they're Typeable:
refreshFoo :: IO Foo

main = do
  fooKey <- schedule 1000000 refreshFoo
  -- fooKey :: PhantomKey Foo
  mfoo <- peekLatest fooKey
  -- mfoo :: Maybe Foo
So what have we accomplished?
Your library is taking in a user IO action, and performing it at arbitrary points in time
Your library is saving your user's data via Dynamic blobs
Your library is loading your user's data via Dynamic blobs
All this without your library knowing anything about your user's data types.
It seems to me that if you are putting something which you know is an IO action into a Dynamic blob, you have lost information in the type system about that thing in a context when you should have instead made use of said type information. TypeRep can get you type information at the value level, but (as far as I know) cannot bubble that information back up into the type level.

Hasql's session and IO

I wrote a function
app :: Request -> H.Session H.Postgres IO Response
which accepts web requests and builds responses (consulting the database as needed). To actually send out the responses I made a wrapper
runApp :: H.Postgres -> H.SessionSettings -> Application
runApp pg sess req respond =
  respond =<< H.session pg sess (app req)
I pass this function to Warp’s runSettings to loop forever and handle requests:
runSettings appSettings $ runApp pgSettings sessSettings
However, this is really bad, because it creates a new session for every request, which defeats the purpose of the connection pool and prepared statements.
I would like to call runSettings inside H.session rather than the other way around. However runSettings has a signature Settings -> Application -> IO () and once inside IO I have lost access to the session. Is there a way to get back inside Session b m r?
This is a repost of a question from a private email.
Yes, in your example you create a new session for every request, which is unacceptable.
First of all, Session is just an alias for the reader monad transformer, which gives you direct access to the pool. So you can always do:
session postgresSettings sessionSettings $ do
  -- session' :: H.Session b m r -> m r
  session' <- flip runReaderT <$> ask
  let runApp request respond =
        respond =<< session' (app request)
  -- run Warp here:
  liftIO $ runSettings appSettings runApp
Secondly, ReaderT has a MonadBaseControl instance, which is intended for similar patterns.
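A sketch of that second route (an assumption on my part: it relies on Session specializing to a plain ReaderT over IO, so that MonadBaseControl's saved state is trivial; appSettings and app are the question's):

import Control.Monad.Trans.Control (control)

serveInSession :: H.Postgres -> H.SessionSettings -> IO ()
serveInSession pgSettings sessSettings =
  H.session pgSettings sessSettings $
    -- control hands us runInIO :: Session b IO a -> IO a, letting the
    -- Warp loop run each request in the one long-lived session
    control $ \runInIO ->
      runSettings appSettings $ \request respond ->
        respond =<< runInIO (app request)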

Haskell: Making Snap and LevelDB play nice

I am using the LevelDB library and Snap framework together. I have:
main :: IO ()
main = runResourceT $ do
  db <- open "thedb" defaultOptions { createIfMissing = True }
  liftIO $ serveSnaplet defaultConfig $ initWeb db
Now in my handler, I'm unsure how to get back to MonadResource IO in order to query the database:
handleWords :: Handler App App ()
handleWords = do
  words <- uses thedb $ \db -> get db def "words"
  writeBS $ pack $ show words
Which gives me a: No instance for (MonadResource IO) arising from a use of 'get'
Any ideas? I feel like I'm missing something about how to properly create a monad "stack". Thanks
MonadResource/ResourceT is one way of acquiring scarce resources in a way that guarantees resources will be freed in the case of an exception. Another approach is the bracket pattern, which is supported by Snap via the bracketSnap function. You can use this to create the ResourceT context needed by LevelDB:
import qualified Control.Monad.Trans.Resource as Res

bracketSnap Res.createInternalState Res.closeInternalState $ \resState -> do
  let openAction = open "thedb" defaultOptions { createIfMissing = True }
  db <- Res.runInternalState openAction resState
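  -- Hypothetical continuation (not in the original answer): later code in
  -- this block can reuse the same internal state for queries; liftIO is
  -- needed because bracketSnap's body runs in Snap.
  mWords <- liftIO $ Res.runInternalState (get db def "words") resState
  writeBS $ pack $ show mWords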
This could be made simpler with some changes in Snap and leveldb:
Instead of only providing the open function, which presumes a MonadResource context, there could be a function which returns a Resource value. I'm making this tweak in persistent for the 2.0 release.
Snap could provide support for either MonadResource or the Resource monad (two separate concepts with unfortunately similar names).
Snap doesn't need to support MonadResource or Resource for you to do this. You're doing the monad transformer composition in the wrong direction. A look at the types will help.
serveSnaplet :: Config Snap AppConfig -> SnapletInit b b -> IO ()
runResourceT :: MonadBaseControl IO m => ResourceT m a -> m a
So you're trying to put an IO in the place where a ResourceT is expected. You should approach this the other way around: put your open "thedb" ... call inside your application's Initializer with a liftIO. But open needs a MonadResource context, so you use the ResourceT instance (via runResourceT) to get it into IO. It will look something like this:
app = makeSnaplet "app" "A snaplet example application." Nothing $ do
  ...
  db <- liftIO $ runResourceT $ open "thedb" defaultOptions
Then store the db handle in your App state and you can retrieve it later using Handler's MonadReader or MonadState instances.
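A sketch of that storage pattern (the _thedb field and lens boilerplate are my assumptions; the question's uses thedb suggests lens-generated accessors):

{-# LANGUAGE TemplateHaskell #-}
-- assumes Control.Lens, Database.LevelDB, Data.ByteString.Char8 (pack)
data App = App { _thedb :: DB }
makeLenses ''App

handleWords :: Handler App App ()
handleWords = do
  db <- use thedb  -- Handler's MonadState instance gives access to App
  mWords <- liftIO $ runResourceT $ get db def "words"
  writeBS $ pack $ show mWords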
