Catch exceptions in Happstack - haskell

I got an exception (I used Prelude.head on an empty list), and every HTTP request after that returned a 502/505 error (while the Happstack app printed "HTTP request failed with: send resource vanished (Broken pipe)" to stdout).
My question is this: what is the best practice for handling exceptions in Happstack? Should I use something other than simpleHTTP, or simply use Control.Exception.catch on the controller function?
It currently looks similar to the example in the Crash Course:
main :: IO ()
main = do
  hSetEncoding stdout utf8
  bracket (startSystemState (Proxy :: Proxy AppState)) createCheckpointAndShutdown $
    \_control ->
      simpleHTTP nullConf { port = 1729 } $ do
        _ <- liftIO today
        decodeBody policy
        controller
  where
    createCheckpointAndShutdown control = do
      createCheckpoint control
      shutdownSystem control

What do you mean by "the server went down"?
If you are handling a Request and do head [], that should only kill the thread that is handling that particular request. The server itself should continue running and processing other requests.
If you have found a way to kill the whole server, that is a bug, and a bug report / test case would be highly appreciated.
If only the current thread is being killed, and you want to catch that exception, then I think you need to use MonadPeelIO,
http://hackage.haskell.org/packages/archive/monad-peel/0.1/doc/html/Control-Monad-IO-Peel.html
Someone has submitted a patch for that, but it has not been reviewed/merged yet.
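If the goal is just to keep one bad request from producing an unhandled exception, one option (independent of simpleHTTP) is to wrap the risky IO work in try via liftIO inside the handler and turn failures into a 500 response. A minimal sketch, where riskyLookup is a hypothetical IO action standing in for the real controller work:

{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, try)
import Control.Monad.IO.Class (liftIO)
import Happstack.Server

-- Run the risky IO action inside 'try', so an exception in one request
-- becomes a 500 response for that request instead of an unhandled error.
safeController :: IO String -> ServerPart Response
safeController riskyLookup = do
  result <- liftIO (try riskyLookup)
  case result of
    Left (e :: SomeException) ->
      internalServerError $ toResponse ("Something went wrong: " ++ show e)
    Right val ->
      ok $ toResponse val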

Related

Handle one connection for all the server requests

Here is the code:
start :: Settings -> IO ()
start settings@Settings {healthCheckLoggerId} = do
  waitTillHealthy
    healthCheckLoggerId
    settings
    Server.getDependencies
    Server.healthCheck
  Server.getDependencies
    settings
    runServerOnWarp
  where
    runServerOnWarp :: Server.Dependencies -> IO ()
    runServerOnWarp dependencies@Server.Dependencies {logger, port} = do
      run port $ application
        (Proxy :: Proxy GSDMonitoringStreamingApi)
        monitoringServer
        dependencies

    monitoringServer :: ServantServer GSDMonitoringStreamingApi Server.Dependencies
    monitoringServer dependencies = streamCommandResponse dependencies
      where
        streamCommandResponse :: Server.Dependencies
                              -> WorkspaceId
                              -> Handler (PipeStream (Either StreamIssue (Persisted CommandResponse)))
        streamCommandResponse Server.Dependencies {eventStoreClientDependencies} =
          return . toPipes . GsdMonitoring.streamCommandResponse eventStoreClientDependencies
Explanation of the issue:
The function Server.getDependencies wraps a bracket pattern that acquires a connection to a database (the client library for that database recommends using one connection for everything, not one per request).
First I run a health check, which blocks until everything is fine; each time I test the health of a dependency, I acquire a new connection. Once the environment is healthy, I call Server.getDependencies again to get the dependencies (the DB connection) and run the server. Everything inside runServerOnWarp is executed in the "computation to run in-between" section of
bracket
  :: IO a         -- ^ computation to run first ("acquire resource")
  -> (a -> IO b)  -- ^ computation to run last ("release resource")
  -> (a -> IO c)  -- ^ computation to run in-between
  -> IO c         -- returns the value from the in-between computation
so all the requests are executed in that section and share the same connection; so far so good... Because that section:
run port $ application
  (Proxy :: Proxy GSDMonitoringStreamingApi)
  monitoringServer
  dependencies
never ends, we always stay in the "computation to run in-between" (except when I kill the app...).
Now the issue is when the connection gets closed: all my requests then return Left StreamIssue, which is fine, but I have to get a new connection from Server.getDependencies, and I don't know how to do that properly :-( Hacks that I have in mind but don't like are:
1) Having one connection per request.
2) Raising an exception when a request streams a Left StreamIssue, which would somehow restart the server to get new dependencies... The problem is that the client of the server never gets a Left value, and the communication between the two is violently closed because the server is restarted...
Do you have a cleaner way in mind?
P.S.: I hope the details are enough to understand the issue; don't hesitate to ask for more information if necessary...
If losing the database connection is rare, letting the server crash and having some supervisory process restart it works quite well.
If losing the DB connection is common enough that you can't tolerate client timeouts from restarting the server, you can add reconnect logic to whatever code uses the DB connection. I usually use resource-pool for this. I believe you can create the "pool" with only one resource, if that fits your database.
Usually getting a DB connection is fast, so I'm guessing it's not worth extra effort to route incoming requests elsewhere. withResource in resource-pool blocks until a resource is ready, so active requests can easily wait while you reconnect.
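To make the pooling suggestion concrete, here is a minimal sketch using resource-pool's classic createPool API (newer versions expose newPool instead); Connection, connect and disconnect are hypothetical stand-ins for the real EventStore client:

import Data.Pool (Pool, createPool, withResource)

-- Hypothetical stand-ins for the real client API:
data Connection = Connection
connect :: IO Connection
connect = pure Connection
disconnect :: Connection -> IO ()
disconnect _ = pure ()

-- A pool that holds at most one connection; a dropped connection is destroyed
-- and a fresh one is created the next time a handler asks for one.
mkConnectionPool :: IO (Pool Connection)
mkConnectionPool =
  createPool
    connect     -- acquire a connection
    disconnect  -- release a connection
    1           -- number of stripes (sub-pools)
    60          -- seconds an idle connection is kept alive
    1           -- at most one connection per stripe

-- In a handler: withResource blocks until a healthy connection is available,
-- so requests simply wait while the pool reconnects.
streamWithPool :: Pool Connection -> (Connection -> IO r) -> IO r
streamWithPool = withResource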

Haskell, Yesod and Keter - How can I run a routine periodically (every 5 minutes)?

There are some database queries I want to run periodically and, depending on their results, send notifications to users' email addresses and change the state of their accounts. Can I do it within Yesod itself?
I moved this here from Yesod's issue tracker:
Run Handler code at a specific time within yesod · Issue #1529 · yesodweb/yesod
I do not know your complete code, so this is a proposal.
makeApplication :: App -> IO Application
makeApplication foundation = do
  unsafeHandler foundation $
    forkHandler (\_ -> catchError) $ forever $ do  -- catchError does not exist yet
      waitUntil10AM                                -- waitUntil10AM does not exist yet
      getCheckupR
  logWare <- makeLogWare foundation
  -- Create the WAI application and apply middlewares
  appPlain <- toWaiAppPlain foundation
  return $ logWare $ (acceptOverride . autohead . gzip def) appPlain
The point of this code is to use unsafeHandler and forkHandler.
waitUntil10AM
I do not know your timezone, environment, database structure, etc., so I want you to fill in the details yourself.
For example, you could put a threadDelay inside the forever loop and check once every ten minutes: record in the database the date on which the mail was already sent, and send it only if it has not been sent yet and the time is past 10 AM.
catchError
Please decide what kind of processing should be done when an error occurs.
Regarding "I would like to handle errors in a way that it would never stop":
You can name the function passed to forkHandler and call it again on error.
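For illustration, here is a minimal sketch of what waitUntil10AM could look like, assuming local time is what matters and leaving the "already sent today" bookkeeping to the database as described above:

import Control.Concurrent (threadDelay)
import Data.Time

-- Poll every ten minutes until the local time is past 10:00.
-- The caller is still responsible for recording in the database that the
-- mail was already sent today, so the job does not fire repeatedly.
waitUntil10AM :: IO ()
waitUntil10AM = do
  tz  <- getCurrentTimeZone
  now <- getCurrentTime
  let hour = todHour (localTimeOfDay (utcToLocalTime tz now))
  if hour >= 10
    then return ()
    else do
      threadDelay (10 * 60 * 1000 * 1000)  -- ten minutes, in microseconds
      waitUntil10AM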

How can I limit size of request body and headers in WAI?

I am developing an application using Scotty and, of course, WAI. I would like to be able to limit the size of requests, both for body length and for headers. How can I do that? Is it possible with a plain WAI middleware?
I don't know the details of Scotty, but it's certainly possible to set up a WAI middleware that looks at the requestBodyLength and, if it's too large, returns an appropriate 413 status code page. One thing you'd need to deal with is the case where the upload body is sent with chunked encoding, in which case no content-length is present, but that's uncommon. You have the option of either rejecting those requests, or adding code to wrap the request body and return an error if it turns out to be too large (that's what Yesod does).
The marked solution points in the correct direction, but if you're like me you might still struggle to explicitly derive the full code needed. Here is an implementation (thanks to the help of an experienced Haskell friend):
import qualified Network.HTTP.Types as Http
import qualified Network.Wai as Wai

limitRequestSize :: Wai.Middleware
limitRequestSize app req respond =
  case Wai.requestBodyLength req of
    Wai.KnownLength len ->
      if len > maxLen
        then respond $ Wai.responseBuilder Http.status413 [] mempty
        else app req respond
    Wai.ChunkedBody ->
      respond $ Wai.responseBuilder Http.status411 [] mempty
  where
    maxLen = 50 * 1000 -- 50 kB
The middleware then just runs in Scotty's do block like this:
import Network.Wai.Middleware.RequestLogger (logStdout)

main :: IO ()
main =
  scotty 3000 $ do
    middleware logStdout
    middleware limitRequestSize
    get "/alive" $ do
      status Http.status200
    -- ...
If you're curious as to how to derive it (or why I found this not overly trivial), consider that Middleware is an alias for
Application -> Application
where Application itself is an alias for
Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived
Hence there are quite a few arguments to (mentally) unpack, even if the solution is pretty terse.
As of wai-extra-3.1.1 the code described above has been added to the Network.Wai.Middleware.RequestSizeLimit module, so it can just be pulled in as a dependency.
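Assuming the settings-based API exported by that module (requestSizeLimitMiddleware together with defaultRequestSizeLimitSettings), using the packaged middleware in Scotty looks roughly like this sketch; check the wai-extra haddocks for the exact settings available:

import qualified Network.HTTP.Types as Http
import Network.Wai.Middleware.RequestSizeLimit
  (defaultRequestSizeLimitSettings, requestSizeLimitMiddleware)
import Web.Scotty

main :: IO ()
main =
  scotty 3000 $ do
    -- the packaged middleware replaces the hand-rolled limitRequestSize above
    middleware (requestSizeLimitMiddleware defaultRequestSizeLimitSettings)
    get "/alive" $ status Http.status200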

Method 'PUT' in Snap

If I specify method="PUT" in a web form and also filter by this method in the route by giving
("/tickets/:id", method PUT updateTicket)
the form submission never reaches the handler.
The handler is never even triggered. As soon as I change the method to POST in both the route and the form, everything works as expected.
Any idea why?
Thanks.
Try not limiting your route to a certain method. Then you can do some debugging to see what's really going on:
updateTicket = do
  meth <- getsRequest rqMethod
  liftIO $ putStrLn $ "Request had method " ++ show meth
  ...
Then play around with your form and see what's actually happening.

What is the synchronization defect in this Haskell chat code, and what is the fix?

Simon Marlow gave a High performance concurrency talk at Haskell eXchange 2012. Due to time constraints, he skipped the section on a simple concurrent chat server. Curious about the elided content, I did a web search and found similar slides on server applications and an implementation on GitHub.
Slide 33 reads
Back to talk…
talk :: Server -> Handle -> IO ()
talk server@Server{..} handle = do
  hSetNewlineMode handle universalNewlineMode
  hSetBuffering handle LineBuffering
  readName
  where
    readName = do
      hPutStrLn handle "What is your name?"
      name <- hGetLine handle
      m <- checkAddClient server name handle
      case m of
        Nothing -> do
          hPrintf handle "The name %s is in use" name
          readName
        Just client -> do
          runClient server client
            `finally` removeClient server name
Strictly speaking we should plug the hole between checkAddClient and finally (see the notes…)
Earlier, slide 3 mentions “Chapter 14 in the notes,” which I assume refers to his upcoming book. What is the synchronization crack between checkAddClient and finally, and how do we plug it?
The aforementioned implementation uses mask from Control.Exception. If this is the fix, what is a scenario in which an ill-timed exception spoils the party?
...
readName = do
  hPutStrLn handle "What is your name?"
  name <- hGetLine handle
  if null name
    then readName
    else mask $ \restore -> do
           ok <- checkAddClient server name handle
           case ok of
             Nothing -> restore $ do
               hPrintf handle
                 "The name %s is in use, please choose another\n" name
               readName
             Just client ->
               restore (runClient server client)
                 `finally` removeClient server name
You want to make sure that every successful checkAddClient is paired with a removeClient. The finally statement at the bottom only guarantees that removeClient is run if the runClient action begins.
However, there is a brief window between the end of checkAddClient and the beginning of runClient where that code could receive an asynchronous exception. If it did, finally would not get a chance to register the removeClient command. This is the synchronization crack that Simon is referring to.
The solution is to mask asynchronous exceptions by default and only allow them to show up in certain places (i.e. the actions wrapped by restore). This seals up the aforementioned crack.
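To see the shape of the fix in isolation: the masked readName follows the same acquire/use/release pattern that bracket from Control.Exception implements, sketched below.

import Control.Exception (finally, mask)

-- Acquire inside 'mask', where asynchronous exceptions cannot fire, then run
-- the "use" action with exceptions re-enabled via 'restore', paired with the
-- release action by 'finally'. There is no window in which the resource has
-- been acquired but the release is not yet registered.
acquireUseRelease :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
acquireUseRelease acquire release use =
  mask $ \restore -> do
    resource <- acquire
    restore (use resource) `finally` release resource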
What if checkAddClient did this:
checkAddClient server name handle = do
  addClient server name handle
  return undefined
The exception would not be triggered until the case evaluated its argument, and removeClient would never get called.
But, honestly, I don't understand asynchronous exceptions so this is a wild guess at an example.
