I'd like to be able to stream S3 bucket object contents via Servant as the response body.
I'm running into a missing MonadResource instance for Handler:
src/Servant/Streaming/Example.hs:29:3: error:
    * No instance for (MonadResource Handler)
        arising from a use of `runAWS'
    * In a stmt of a 'do' block: runAWS env conduits
      In the expression:
        do env <- newEnv Discover
           runAWS env conduits
      In an equation for `server':
          server
            = do env <- newEnv Discover
                 runAWS env conduits
   |
29 |   runAWS env conduits
   |   ^^^^^^^^^^^^^^^^^^^
I have made a repository to reproduce: https://github.com/domenkozar/servant-streaming-amazonka
servant-streaming-server handles ResourceT for Stream (Of BS.ByteString) (ResourceT IO) () https://github.com/plow-technologies/servant-streaming/blob/master/servant-streaming-server/src/Servant/Streaming/Server/Internal.hs#L77-L79
but since I'm using Amazonka I also need to make sure the MonadResource constraint for Handler is taken care of within that bracket. It's not clear to me how to do that.
My understanding is that enter/hoistServer won't work here, since resources would be cleaned up too soon (before streaming).
Notes:
This builds upon a question asked in 2016: Haskell Servant and streaming
Servant part is implemented via https://github.com/plow-technologies/servant-streaming/pull/2/files
Edits
EDIT: I've since replaced $$ with $$+-
EDIT2: I've resolved Conduit specific errors, now fighting with MonadResource
Solved with
server :: Server API
server = do
  st <- createInternalState            -- from Control.Monad.Trans.Resource
  env <- newEnv Discover
  res <- runInternalState (runAWS env conduits) st
  return (res >> liftIO (closeInternalState st))
https://github.com/domenkozar/servant-streaming-amazonka/commit/c5fad78dd7bf733cecb8790035105c819d5f5ae9
Related
I have an API written using the Servant library which connects to a postgres db. The connection string for my db is stored in a configuration file. Every time I make a request to any endpoint that interacts with the db, I have to read the file to get the connection string; this is what I'm trying to avoid.
Step-by-step example of what I'm trying to achieve:
Application starts up.
Contents of the config file are read and bound to some type/object.
I make a request to my endpoint to create an entry in the db.
I read the connection string from the type/object that I bound it to and NOT the config file.
Every subsequent request for the lifetime of the application does not have to read the config file everytime it wants to interact with the database.
Doing this in something like Java/C# you would just bind the contents of a file to some POCO, which would be added to your DI container as a singleton so it can be referenced anywhere in your application and persists between requests. If I have 100 requests that interact with the db, none of those 100 requests would need to read the config file to get the connection string, as it was already loaded into memory when the app started.
I have thought about using the cache package, but is there an easier way to do something like this without a third party package?
Let's begin with this trivial Servant server:
import Servant
import Servant.Server
type FooAPI = Get '[JSON] Int
fooServer :: Server FooAPI
fooServer = pure 1
Suppose we don't want to hardcode that 1. We could turn fooServer into a function like
makeFooServer :: Int -> Server FooAPI
makeFooServer n = pure n
Then, in main, we could read the n from a file and then call makeFooServer to construct the server, as sketched below. Something similar could be done for your database connection.
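For instance, a minimal sketch of that main, assuming warp's run as the entry point; "config.txt" and the use of read are just placeholders for your real configuration handling:
import Network.Wai.Handler.Warp (run)
import Servant

-- Minimal sketch: read the value once at startup and build the server from it.
main :: IO ()
main = do
  n <- read <$> readFile "config.txt"   -- happens exactly once, at startup
  run 8080 (serve (Proxy :: Proxy FooAPI) (makeFooServer n))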
There's another approach that might sometimes be preferable. Servant lets you define servers whose handlers live in a monad different from Handler, and then transform them into regular servers (tutorial).
We can write a server in which the handler monad is a ReaderT holding the configuration:
import Control.Monad.Trans.Reader
type RHandler env = ReaderT env Handler
fooServer' :: ServerT FooAPI (RHandler Int)
fooServer' = do
  n <- ask
  pure n
Where ServerT is a more general form of Server that lets you specify the handler monad in an extra type argument.
Then, we use the hoistServer function to supply the initial environment and go back to a regular server:
-- "Server FooAPI" is the same as "ServerT FooAPI Handler"
-- so the transformation we need is simply to run the `ReaderT`
-- by supplying an environment.
backToNormalServer :: Int -> Server FooAPI
backToNormalServer n = hoistServer (Proxy @FooAPI) (flip runReaderT n) fooServer'
The ServerT FooAPI (RHandler Int) approach has the advantage that you still have a server value that you can directly manipulate and pass around, instead of it being the result of a function.
Also, for some advanced use cases, the environment might reflect information derived from the structure of each endpoint.
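Tying this back to the original question about the connection string, here is a hedged end-to-end sketch. AppConfig, ItemAPI, the route and "db.conf" are all made up for illustration; the point is only that the file is read once in main and every handler then reads the cached value from the ReaderT environment:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)
import Network.Wai (Application)
import Network.Wai.Handler.Warp (run)
import Servant

-- Hypothetical configuration type; in a real application it would hold
-- whatever the config file provides, e.g. the database connection string.
newtype AppConfig = AppConfig { connectionString :: String }

type AppHandler = ReaderT AppConfig Handler

type ItemAPI = "items" :> Get '[JSON] [String]

itemServer :: ServerT ItemAPI AppHandler
itemServer = do
  connStr <- asks connectionString  -- cached in memory; no file IO per request
  -- ... open your database connection with connStr and run the query here ...
  pure []

app :: AppConfig -> Application
app cfg =
  serve api (hoistServer api (flip runReaderT cfg) itemServer)
  where
    api = Proxy :: Proxy ItemAPI

main :: IO ()
main = do
  cfg <- AppConfig <$> readFile "db.conf"  -- read the config file once at startup
  run 8080 (app cfg)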
There are some database queries I want to run periodically and, depending on the results, send notifications to users' email and change the state of their accounts. Can I do it within Yesod itself?
I moved this question over from Yesod's issue tracker:
Run Handler code at a specific time within yesod · Issue #1529 · yesodweb/yesod
I do not know your complete code, so this is a proposal:
makeApplication :: App -> IO Application
makeApplication foundation = do
    unsafeHandler foundation $
        forkHandler (\_ -> catchError) $ forever $ do  -- catchError does not exist yet; write your own
            waitUntil10AM                              -- waitUntil10AM does not exist yet; write your own
            getCheckupR
    logWare <- makeLogWare foundation
    -- Create the WAI application and apply middlewares
    appPlain <- toWaiAppPlain foundation
    return $ logWare $ (acceptOverride . autohead . gzip def) appPlain
The point of this code is the use of unsafeHandler and forkHandler.
waitUntil10AM
I do not know your timezone, environment, database structure, etc., so I want you to write the details yourself.
For example, you could put a threadDelay inside the forever loop and check once every ten minutes: store in the database the date on which you last sent the mail, and send it if it has not been sent yet today and the time is past 10 AM.
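For concreteness, one possible shape of that loop as a plain IO sketch; alreadySentToday and runDailyJob are hypothetical hooks into your database and mailing code:
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.Time

-- Wake up every ten minutes; fire the job at most once per day, after 10 AM.
pollEveryTenMinutes :: IO Bool -> IO () -> IO ()
pollEveryTenMinutes alreadySentToday runDailyJob = forever $ do
  now <- getZonedTime
  let tod = localTimeOfDay (zonedTimeToLocalTime now)
  sent <- alreadySentToday
  when (todHour tod >= 10 && not sent) runDailyJob
  threadDelay (10 * 60 * 1000000)  -- ten minutes, in microseconds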
catchError
Please decide what kind of processing should be done at the time of error.
I would like to handle errors in a way that it never stops
You can name the function to be passed inside forkHandler and call it again on error.
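A hedged sketch of that idea, assuming the scaffolded Handler alias and the hypothetical waitUntil10AM/getCheckupR from above: if the background loop dies with any exception, the handler simply forks itself again, so the periodic job never stops for good.
import Control.Exception (SomeException)
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import Yesod.Core.Handler (forkHandler)

-- Restart the background job whenever an exception kills it.
startChecker :: Handler ()
startChecker = forkHandler restart loop
  where
    loop = forever $ do
      liftIO waitUntil10AM   -- hypothetical helper from the answer above
      getCheckupR            -- your existing handler
    restart :: SomeException -> Handler ()
    restart _err = startChecker  -- optionally log _err before restarting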
I am developing an application using Scotty and of course WAI. I would like to be able to limit the size of requests, both for body length and for headers. How can I do that? Is it possible to do it using a plain WAI middleware ?
I don't know the details of Scotty, but it's certainly possible to set up a WAI middleware that will look at the requestBodyLength and, if it's too large, return an appropriate 413 status code page. One thing you'd need to deal with is the case where the upload body is sent with chunked encoding, in which case no content-length is present, but that's uncommon. You have the option of either rejecting those requests, or adding code to wrap the request body and return an error if it turns out to be too large (that's what Yesod does).
The marked solution points in the correct direction, but if you're like me you might still struggle to explicitly derive the full code needed. Here is an implementation (thanks to the help of an experienced Haskell friend):
import qualified Network.HTTP.Types as Http
import qualified Network.Wai as Wai

limitRequestSize :: Wai.Middleware
limitRequestSize app req respond = do
  case Wai.requestBodyLength req of
    Wai.KnownLength len -> do
      if len > maxLen
        then respond $ Wai.responseBuilder Http.status413 [] mempty
        else app req respond
    Wai.ChunkedBody ->
      respond $ Wai.responseBuilder Http.status411 [] mempty
  where
    maxLen = 50*1000 -- 50kB
The middleware then just runs in scotty's do block like this
{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty
import Network.Wai.Middleware.RequestLogger (logStdout)
import qualified Network.HTTP.Types as Http

main :: IO ()
main = do
  scotty 3000 $ do
    middleware logStdout
    middleware limitRequestSize
    get "/alive" $ do
      status Http.status200
    -- ...
If you're curious as to how to derive it (or why I found this not overly trivial), consider that Middleware is an alias for
Application -> Application
where Application itself is an alias for
Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived
Hence there are quite a bunch of arguments to (mentally) unpack, even if the solution is pretty terse.
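To make the unpacking concrete, the same middleware can be given its fully expanded type. This reuses limitRequestSize from above; the first argument is the wrapped Application, the remaining two are the Request and the "send the response" continuation:
import qualified Network.Wai as Wai

limitRequestSizeExpanded
  :: (Wai.Request -> (Wai.Response -> IO Wai.ResponseReceived) -> IO Wai.ResponseReceived)
  -> Wai.Request
  -> (Wai.Response -> IO Wai.ResponseReceived)
  -> IO Wai.ResponseReceived
limitRequestSizeExpanded = limitRequestSize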
As of wai-extra-3.1.1 the code described above has been added to the Network.Wai.Middleware.RequestSizeLimit module, so it can just be pulled in as a dependency.
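A sketch of using that module instead of the hand-rolled middleware, assuming the wai-extra >= 3.1.1 names requestSizeLimitMiddleware and defaultRequestSizeLimitSettings (check the package docs for the setters that adjust the limit and the "length exceeded" response):
{-# LANGUAGE OverloadedStrings #-}

import Network.Wai.Middleware.RequestSizeLimit
import Web.Scotty

main :: IO ()
main = scotty 3000 $ do
  -- defaultRequestSizeLimitSettings applies the package's default body limit
  middleware (requestSizeLimitMiddleware defaultRequestSizeLimitSettings)
  get "/alive" $ text "ok"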
Simon Marlow gave a High performance concurrency talk at Haskell eXchange 2012. Due to time constraints, he skipped the section on a simple concurrent chat server. Curious about the elided content, a web search found similar slides on server applications and an implementation on GitHub.
Slide 33 reads
Back to talk…
talk :: Server -> Handle -> IO ()
talk server@Server{..} handle = do
  hSetNewlineMode handle universalNewlineMode
  hSetBuffering handle LineBuffering
  readName
 where
  readName = do
    hPutStrLn handle "What is your name?"
    name <- hGetLine handle
    m <- checkAddClient server name handle
    case m of
      Nothing -> do
        hPrintf handle "The name %s is in use" name
        readName
      Just client -> do
        runClient server client
          `finally` removeClient server name
Strictly speaking we should plug the hole between checkAddClient and finally (see the notes…)
Earlier, slide 3 mentions “Chapter 14 in the notes,” which I assume refers to his upcoming book. What is the synchronization crack between checkAddClient and finally, and how do we plug it?
The aforementioned implementation uses mask from Control.Exception. If this is the fix, what is a scenario in which an ill-timed exception spoils the party?
...
readName = do
  hPutStrLn handle "What is your name?"
  name <- hGetLine handle
  if null name
    then readName
    else mask $ \restore -> do
      ok <- checkAddClient server name handle
      case ok of
        Nothing -> restore $ do
          hPrintf handle
            "The name %s is in use, please choose another\n" name
          readName
        Just client ->
          restore (runClient server client)
            `finally` removeClient server name
You want to make sure that every successful checkAddClient is paired with a removeClient. The finally statement at the bottom only guarantees that removeClient is run if the runClient action begins.
However, there is a brief window in between the end of checkAddClient and the beginning of runClient where that code could receive an asynchronous exception. If it did, finally would not get a chance to register the removeClient command. This is the synchronization crack that Simon is referring to.
The solution is to mask asynchronous exceptions by default and only allow them to show up in certain places (i.e. the actions wrapped by restore). This seals up the aforementioned crack.
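To see the shape of the fix in isolation, here is a stripped-down illustration of the same acquire-then-register pattern (not code from the talk, essentially what Control.Exception.bracket does for you):
import Control.Exception (mask, finally)

-- Inside 'mask' asynchronous exceptions are deferred, so nothing can slip in
-- between acquiring the resource and attaching the cleanup with 'finally'.
-- 'restore' re-enables interruptions only while the long-running body runs.
acquireThenUse :: IO a -> (a -> IO ()) -> (a -> IO r) -> IO r
acquireThenUse acquire release use =
  mask $ \restore -> do
    x <- acquire                          -- protected: no async exception here
    restore (use x) `finally` release x   -- cleanup is already registered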
What if checkAddClient did this:
checkAddClient server name handle = do
addClient server name handle
return undefined
The exception would not be triggered until the case evaluated its argument, and removeClient would never get called.
But, honestly, I don't understand asynchronous exceptions so this is a wild guess at an example.
I am developing a game and chose Happstack for the persistence part. I find it quite easy to use; I made a quick example for myself to understand it:
getAllObjects :: MonadIO m => m [Thing]
getAllObjects = do
  elems <- query GetObjects
  return elems

addAnObject :: (MonadIO m) => Thing -> m ()
addAnObject thing = do update $ AddObject thing

test command = do
  control <- startSystemState macidProxy
  result <- command
  shutdownSystem control
  return result

checkpoint = do
  control <- startSystemState macidProxy
  createCheckpoint control
  shutdownSystem control
and every time I 'test' it, it creates an events file. Then I 'checkpoint' and it creates a new checkpoint file, which is fine by me; the problem is that the old events files keep accumulating! I manually delete every file (except the last checkpoint and the current events file).
Is there some code I'm missing from Happstack to do the 'delete old things'?
There is no built-in mechanism for purging old event files. Lemmih has talked about adding such facilities to acid-state at some point in time.
EDIT: The darcs version of acid-state now has a function 'createArchive' to archive old log files that are no longer needed to restore the current state.
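With acid-state, the workflow that edit refers to looks roughly like this. This is a sketch against acid-state's Data.Acid API, not the older happstack-state API used in the question:
import Data.Acid (AcidState, createCheckpoint, createArchive)

-- After writing a fresh checkpoint, createArchive moves the event logs that
-- are no longer needed to restore the current state into an "Archive"
-- subdirectory of the state folder, which you can then delete or back up.
checkpointAndArchive :: AcidState st -> IO ()
checkpointAndArchive acid = do
  createCheckpoint acid
  createArchive acid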