I have a Yesod app with a warp server, and some of its functionality depends on async exceptions. Recently I needed to migrate it to HTTPS, which I did with the warp-tls package. However, now I can't kill the warp thread by throwing a ThreadKilled exception at it: the throwTo call just hangs and does nothing.
Consider the following example. Here we monitor the state of the warp thread with an MVar (), which is empty while the thread is running and holds () once the thread has been killed.
import Control.Concurrent (MVar, putMVar)
import Control.Exception (bracket)
import Network.Wai.Handler.Warp (defaultSettings, runSettings, setPort)
import Network.Wai.Handler.WarpTLS (runTLS, tlsSettings)

import MyApplication (waiPage)
-- waiPage :: Application

runWai :: MVar () -> IO ()
runWai finishVar = bracket
    (return ())
    (const $ putMVar finishVar ())
    (const runApp)
  where
    -- change this to runAppNormal or runAppTls to check
    runApp = runAppTls

    runAppNormal = runSettings warpSettings waiPage
    runAppTls    = runTLS siteTlsSettings warpSettings waiPage

    warpSettings    = setPort 8080 defaultSettings
    siteTlsSettings = tlsSettings "cert.pem" "key.pem"
main :: IO ()
main = do
    finishVar <- newEmptyMVar
    thread_id <- forkIO $ runWai finishVar
    -- Try to kill the warp thread. Fork, because throwTo might hang
    _ <- forkIO $ throwTo thread_id ThreadKilled
    threadDelay (2 * 10^6) -- wait 2 seconds
    isAlive <- isEmptyMVar finishVar
    if isAlive then putStrLn "Couldn't kill warp thread"
               else putStrLn "Successfully killed warp thread"
    -- Wait for the forked warp thread to finish
    readMVar finishVar
When you have runApp = runAppNormal, you will get the Successfully killed warp thread message, and the application will exit.
When you have runApp = runAppTls, you will get the Couldn't kill warp thread message, and the app will hang and keep serving.
So how do I get rid of this exception-intercepting behavior? Or at least, is there any other way to kill the warp-tls thread?
It turned out to be a Windows-only bug in the version I used (warp-tls == 3.2.4) that was fixed in a later release. I've looked at the fix: it's a function named windowsThreadBlockHack, so if you're stuck with a slightly outdated warp, you can backport the fix yourself.
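For anyone backporting it, the idea behind the fix can be sketched with base alone (this is my reconstruction of the technique, not the warp-tls source): run the blocking action on a child thread and wait on an MVar, because takeMVar is always interruptible by async exceptions, then forward any exception to the child.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent
import Control.Exception

-- Sketch of the windowsThreadBlockHack idea (a reconstruction, not the
-- warp-tls code): the child does the real work; the parent blocks on an
-- MVar, which async exceptions can always interrupt, and relays them.
blockHack :: IO a -> IO a
blockHack act = do
    var <- newEmptyMVar
    tid <- forkIO $
        (act >>= putMVar var . Right)
            `catch` \(e :: SomeException) -> putMVar var (Left e)
    let wait = takeMVar var >>= either throwIO return
    wait `catch` \(e :: SomeException) -> do
        throwTo tid e   -- forward ThreadKilled etc. to the worker
        wait

main :: IO ()
main = do
    r <- blockHack (return (21 * 2 :: Int))
    print r  -- 42
```

Because the parent thread is parked in takeMVar rather than in a foreign call, throwTo can always deliver ThreadKilled to it.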
I'm writing an app using yesod-websockets, and whenever I receive a "Start" message, I need to run a thread that continuously produces data and sends it to the client, until the client tells it to stop. The producer may also stop producing data on its own.
When the producer stops (regardless of whether it stopped producing on its own, or the client told it to stop), it goes back to the main loop and waits for another "Start" message to be received.
The code looks something like this (runnable minimal repro below):
wsApp :: WebSocketsT Handler ()
wsApp =
    forever $ do
        msg <- receiveMsg
        case msg of
            StartMsg ->
                race_
                    produceData
                    (whileM ((/= StopMsg) <$> receiveMsg))
The problem is that if produceData stops on its own, then the thread running receiveMsg will be cancelled, and this causes the websocket connection to be closed.
21/Mar/2018:08:21:06 +0000 [Error#yesod] Exception from Warp: ConnectionClosed #(app-0.0.0-5bzI9Onrk2fFepGGsdocDz:Application src/Application.hs:122:15)
Is there a way to cancel a thread that's listening on the connection, without killing the connection?
Here's a minimal repro:
wsApp :: WebSocketsT Handler ()
wsApp = forever $ do
    race_
        (receiveData :: WebSocketsT Handler Text)
        (pure ())
    $logDebug "Trying again"
The second thread will finish, and the first one will be cancelled, causing the connection to be killed.
Since I couldn't find a more graceful solution, I ended up using an IORef Bool to synchronize the websockets thread and the producer thread.
wsApp :: WebSocketsT Handler ()
wsApp = do
    producing <- newIORef False
    forever $ do
        msg <- receiveMsg
        case msg of
            StartMsg ->
                whenM (not <$> readIORef producing) $ do
                    atomicWriteIORef producing True
                    void . async $ produceData producing
            StopMsg -> atomicWriteIORef producing False

produceData :: IORef Bool -> WebSocketsT Handler ()
produceData producing =
    whenM (readIORef producing) $
        case produce of
            Nothing -> atomicWriteIORef producing False
            Just x  -> sendMsg x >> produceData producing
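The same cooperative-cancellation idea works in plain IO; here is a self-contained sketch (the workload and thread names are invented for the demo): the producer re-checks the flag between steps and exits on its own, so no thread is ever cancelled out from under the connection.

```haskell
import Control.Concurrent
import Control.Monad (when)
import Data.IORef

-- Self-contained illustration of the IORef-flag pattern from the answer
-- above (the threadDelay "work step" is invented): the producer checks
-- the flag between steps and exits cooperatively.
main :: IO ()
main = do
    producing <- newIORef True
    done      <- newEmptyMVar
    _ <- forkIO $ do
        let loop = do
                p <- readIORef producing
                when p $ threadDelay 1000 >> loop  -- one "step" of work
        loop
        putMVar done ()
    threadDelay 10000
    atomicWriteIORef producing False  -- plays the role of StopMsg
    takeMVar done
    putStrLn "producer exited cooperatively"
```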
I'm trying to run a rabbitmq background process on Heroku to pick tasks off a queue and process them. I'm working with the AMQP Haskell library, and they give the following example (parts omitted for brevity):
main = do
    -- setup connection omitted
    -- connect to queue, wait for messages
    consumeMsgs chan "myQueue" Ack myCallback
    -- halts the process so messages are taken off the queue until a key is pressed
    getLine -- wait for keypress
    closeConnection conn -- close connection after key
    putStrLn "connection closed"
This works fine locally because getLine keeps the process running until you press a key. However, when I deploy this to Heroku the process exits with:
2016-04-19T08:37:23.373087+00:00 app[worker.1]: worker: <stdin>: hGetChar: end of file
I figured out from the accepted answer to this question that this is because, in order to deploy a background process via ssh, you need to redirect /dev/null to stdin, which makes the process see an immediate end-of-file.
In our case the getLine call fails on that EOF and the entire process stops, preventing our worker from staying up.
How can I keep this worker running when I deploy?
EDIT: Final Solution
Using #carstons' comments, I ended up with the following working implementation:
main :: IO ()
main = do
    mvar <- newEmptyMVar
    conn <- setupConnection
    queueName <- pack <$> getEnv "QUEUE_NAME"
    chan <- openChannel conn
    consumeMsgs chan queueName Ack processMessage
    installHandler sigINT (Catch (cleanupConnection conn mvar)) Nothing
    putStrLn "Running forever, press ctrl-c to exit"
    -- this blocks until SIGINT is received and the handler for SIGINT
    -- "fills" the mvar; once it is filled the process exits
    takeMVar mvar

mixpanelConfig :: IO Config
mixpanelConfig = liftM2 Config
    (ApiToken . pack <$> getEnv "MIXPANEL_API_TOKEN")
    (newManager tlsManagerSettings)

cleanupConnection :: Connection -> MVar () -> IO ()
cleanupConnection conn mvar = do
    closeConnection conn
    putStrLn "SIGINT received.. closing rabbitmq connection"
    putMVar mvar ()

processMessage :: (Message, Envelope) -> IO ()
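Stripped of the AMQP parts, the shutdown mechanism reduces to a signal handler filling an MVar. A self-contained sketch (raiseSignal stands in for a real Ctrl-C so the example terminates by itself; it uses the unix package, which ships with GHC on POSIX systems):

```haskell
import Control.Concurrent
import System.Posix.Signals

-- Minimal version of the shutdown pattern above: main blocks on an MVar
-- that the SIGINT handler fills. raiseSignal simulates pressing Ctrl-C
-- so the demo exits on its own.
main :: IO ()
main = do
    quit <- newEmptyMVar
    _ <- installHandler sigINT (Catch (putMVar quit ())) Nothing
    _ <- forkIO $ threadDelay 100000 >> raiseSignal sigINT
    takeMVar quit
    putStrLn "SIGINT received, shutting down"
```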
As I pointed out in the comment: if you just want to keep it running forever, you can use forever with, for example, threadDelay:
import Control.Concurrent (threadDelay)
import Control.Monad (forever)

main = do
    -- setup connection omitted
    -- connect to queue, wait for messages
    consumeMsgs chan "myQueue" Ack myCallback
    -- halts the process so messages are taken off the queue forever
    forever $ threadDelay 10000
    -- so this will never happen and you could remove it
    closeConnection conn -- close connection
    putStrLn "connection closed"
Note that this will of course never really close the connection or exit the application; you'll have to kill the process.
The alternative is a bit more involved, as you need some message or way to send your program a termination signal.
An easy way is to use an MVar, which you could set in your myCallback when a certain stop message is received on your queue:
import Control.Concurrent.MVar

main = do
    -- MVar to receive the quit signal
    quitSignal <- newEmptyMVar

    -- setup connection omitted
    -- connect to queue, wait for messages - the callback should
    -- set the quitSignal with putMVar quitSignal ()
    consumeMsgs chan "myQueue" Ack (myCallback quitSignal)

    -- halts the process so messages are taken off the queue till quitSignal
    takeMVar quitSignal

    -- this will happen after quitSignal was set
    closeConnection conn -- close connection
    putStrLn "connection closed"
I'm trying to write a simple shell in Haskell, but I can't get the signal handling to work. If no command is running, sending SIGINT to the shell process triggers the signal handler. But when a blocking call to getProcessStatus is made, the signal is ignored. Sending a signal directly to the child process, of course, kills the child and makes the blocking call return.
Replacing the blocking call with Control.Concurrent.threadDelay does not prevent the signal, i.e., everything works as intended. Replacing the blocking flag to getProcessStatus with False makes the function return before the child process has finished.
Reference to the unix package: https://hackage.haskell.org/package/unix-2.7.1.0/docs/System-Posix-Process.html#v:getProcessStatus
The relevant code is below, see the (only) commented line.
main :: IO ()
main = do
    pidRef <- (newIORef [] :: IO (IORef [ProcessID]))
    setSigHant pidRef
    doPrompt pidRef

printPrompt :: IO ()
printPrompt = fdWrite stdError "λ➔ " >> return ()

doPrompt :: IORef [ProcessID] -> IO ()
doPrompt pidRef = do
    printPrompt
    tryLine <- try getLine :: IO (Either SomeException String)
    case tryLine of
        Left _ -> do
            putStrLn ""
            exitSuccess
        Right line -> do
            tryCl <- try (parse line) :: IO (Either SomeException [Command])
            case tryCl of
                Left e -> fdWrite stdError (show e ++ "\n") >> return ()
                Right cl ->
                    if length cl > 0 && (cmd . head) cl == "cd" then
                        cd (head cl)
                    else do
                        execCommands pidRef cl (stdInput, stdOutput)
                        pids <- readIORef pidRef
                        -- This call to getProcessStatus blocks the signals
                        _ <- sequence $ map (getProcessStatus True False) pids
                        writeIORef pidRef []
    doPrompt pidRef

setSigHant :: IORef [ProcessID] -> IO ()
setSigHant pidRef = do
    let handler = Catch (sigIntHandler pidRef)
    installHandler sigINT handler Nothing
    return ()

sigIntHandler :: IORef [ProcessID] -> IO ()
sigIntHandler pidRef = do
    pids <- readIORef pidRef
    mapM_ (signalProcess sigINT) pids
    fdWrite stdError "\n"
    printPrompt
getProcessStatus uses an interruptible FFI call internally. But why is -threaded necessary?
This blog post about handling ctrl-c in Haskell suggests that signal handling is done in a separate thread that kills the main thread using an asynchronous exception:
When the user hits Ctrl-C, GHC raises an async exception of type
UserInterrupt on the main thread. This happens because GHC installs an
interrupt handler which raises that exception, sending it to the main
thread with throwTo.
But the documentation for the async package mentions that:
Different Haskell implementations have different characteristics with
regard to which operations block all threads.
Using GHC without the -threaded option, all foreign calls will block
all other Haskell threads in the system, although I/O operations will
not. With the -threaded option, only foreign calls with the unsafe
attribute will block all other threads.
So maybe that's why proper handling of SIGINT in the presence of interruptible FFI calls requires -threaded: otherwise, the thread that throws the asynchronous exception is prevented from running.
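For reference, the blocking call under discussion can be exercised in isolation (again using the unix package; the child's exit code here is invented for the demo):

```haskell
import System.Exit
import System.Posix.Process

-- getProcessStatus True False performs a blocking waitpid() under the
-- hood; here the forked child exits immediately, so the wait returns
-- right away with the child's exit status.
main :: IO ()
main = do
    pid <- forkProcess (exitImmediately (ExitFailure 7))
    st  <- getProcessStatus True False pid
    print st
```

Compiled without -threaded, that blocking wait is exactly the kind of foreign call that can keep other Haskell threads, including the signal-handling machinery, from running.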
I'm writing some code with reactive-banana and gtk2hs that needs to read from a file handle. I need to have at least two threads (one to read keyboard events with reactive banana and one to read from the file handle), so at the moment I have code that looks something like this:
type EventSource a = (AddHandler a, a -> IO ())

fire :: EventSource a -> a -> IO ()
fire = snd

watch :: EventSource ByteString -> Handle -> IO ()
watch textIn pty = forever $
    hGetLine pty >>= fire textIn >> threadWaitRead pty
With the following main function:
mainAxn :: IO ()
mainAxn = do
    h <- openFile "foo" ReadMode

    initGUI
    win <- windowNew
    txt <- textViewNew
    containerAdd win txt
    widgetShowAll win

    (keyPress, textIn) <-
        (,) <$> newAddHandler <*> newAddHandler
    network <- setupNetwork keyPress textIn
    actuate network

    _ <- forkIO $ watch textIn h
    _ <- win `on` keyPressEvent $
        eventKeyVal >>= liftIO . fire keyPress >> return True

    mainGUI
and my event network set up as follows:
setupNetwork :: EventSource KeyVal -> EventSource ByteString -> IO EventNetwork
setupNetwork keyPress textIn = compile $ do
    ePressed <- fromAddHandler $ addHandler keyPress
    eText <- fromAddHandler $ addHandler textIn
    reactimate $ print <$> (filterJust $ keyToChar <$> ePressed)
    reactimate $ print <$> eText
(except in my actual code, those reactimate calls write to the TextView built in mainAxn). I found that I needed to build with -threaded to make the event network correctly capture both text from textIn and keypresses from keyPress, which caused issues because it's not safe to modify objects from the gtk package concurrently.
At the moment, I have postGUIAsync calls scattered throughout my code, and I've found that using postGUISync causes the whole thing to deadlock --- I'm not sure why. I think it's because I end up calling postGUISync inside of the same thread that ran mainGUI.
It seems like it would be better to run all of the GUI stuff in its own thread and use the postGUI* functions for every access to it. However, when I change the last line of mainAxn to be
forkIO mainGUI
return ()
the program returns immediately when it hits the end of mainAxn. I tried to fix that by using:
forkIO mainGUI
forever $ return ()
but then the gtk GUI never opens at all, and I don't understand why.
What's the right way to do this? What am I missing?
The basic problem here is that, in Haskell, as soon as main exits, the entire program is torn down. The solution is simply to keep the main thread open; e.g.
done <- newEmptyMVar
forkOS (mainGUI >> putMVar done ())
takeMVar done
I've also replaced forkIO with forkOS. GTK uses (OS-)thread-local state on Windows, so as a matter of defensive programming it is best to ensure that mainGUI runs on a bound thread just in case one day you want to support Windows.
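A runnable skeleton of this shape, with a threadDelay standing in for mainGUI (and forkIO substituted so it runs without -threaded; the forkOS recommended above requires linking with -threaded):

```haskell
import Control.Concurrent

-- Skeleton of the pattern: keep main alive until the forked "GUI" loop
-- finishes. guiLoop is a stand-in for mainGUI; with real GTK you would
-- use forkOS and build with -threaded, as the answer recommends.
main :: IO ()
main = do
    done <- newEmptyMVar
    _ <- forkIO (guiLoop >> putMVar done ())
    takeMVar done  -- main blocks here instead of exiting
    putStrLn "GUI thread finished; exiting cleanly"
  where
    guiLoop = threadDelay 100000
```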
Daniel Wagner answered my question as asked, but I got a more informative perspective from the #haskell IRC channel, which I'll post here for future reference.
Rather than jumping through awkward hoops of forking off the GUI thread and putting the main thread to sleep, a better solution is to let the main thread be the GUI thread and deal with the reactive-banana event network in a new thread. I ended up modifying my main function to contain the following:
keyChan <- newChan
_ <- forkIO $ watchKeys keyPress keyChan
_ <- win `on` keyPressEvent $
    eventKeyVal >>= liftIO . writeChan keyChan >> return True
where watchKeys is defined as:
watchKeys :: EventSource KeyVal -> Chan KeyVal -> IO ()
watchKeys keyPress chan = forever $
    readChan chan >>= fire keyPress
Now I can deal with the postGUI(A)Sync issues in exactly one place, by defining:
reactimateSafe :: Frameworks t => Event t (IO ()) -> Moment t ()
reactimateSafe = reactimate . fmap postGUIAsync
and using reactimateSafe for any IO action that modifies a GTK object.
If a process thread opens an inproc ZMQ socket, and then dies because of some unhandled exception, what happens if the socket is not closed? How bad is this sort of a practice?
To be more specific, I've implemented a very simple message broker very similar to http://zguide.zeromq.org/page:all#Multithreading-with-MQ in Haskell.
The worker thread opens a new socket, and waits in an infinite loop for processing messages.
The socket is not closed anywhere in the worker thread.
Now, if there is an unhandled exception in the worker thread, and the thread dies, how bad is to just restart the thread without caring?
I'm pasting the worker code from the Haskell example:
worker :: ZMQ z ()
worker = do
    receiver <- socket Rep
    connect receiver "inproc://workers"
    forever $ do
        receive receiver >>= liftIO . printf "Received request:%s\n" . unpack
        -- Suppose there is some exception here
        liftIO $ threadDelay (1 * 1000 * 1000)
        send receiver [] "World"
So it seems that if you don't close the inproc socket, the restarted thread can't accept messages very well. I'm not sure I understand this behaviour, but I can confirm that this modified example from the ZMQ haskell guide works:
import System.ZMQ3.Monadic
import Prelude hiding (catch)
import Control.Monad.CatchIO

worker :: ZMQ z ()
worker = do
    liftIO $ putStrLn "Starting the worker thread..."
    receiver <- socket Rep
    connect receiver "inproc://workers"
    catch
        (forever $ do
            liftIO $ putStrLn "Waiting for an inproc message"
            request <- receiveMulti receiver -- request :: [ByteString]
            liftIO $ putStrLn "I'm doing something that may throw an error"
            -- error "IO Error has happened"
        )
        (\(e :: IOError) -> do
            liftIO $ putStrLn $ "Caught error: " ++ show e
            -- Commenting this out leaves the restarted worker thread unable to accept new messages
            close receiver
        )