How to publish in Hedis Haskell pubSub - haskell

I'm currently learning Haskell, and I'm not yet very good at functional programming.
I want to write a piece of code that gets data by subscribing to a topic in Redis, does some calculation on it, and publishes the result to another topic. I'm having trouble with, I guess, some language-specific features.
My current code:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Database.Redis
import System.IO
main = do
  conn <- connect defaultConnectInfo
  runRedis conn $ do
    pubSub (subscribe ["commands"]) $ \msg -> do
      putStrLn $ "Message from " ++ show (msgChannel msg)
      publish "results" "Result of a very interesting calculation"
      return mempty
Now I receive the error: • No instance for (RedisCtx IO f0) arising from a use of ‘publish’
Putting the publish outside the pubSub makes it work, but I want to publish a result! I can't get any wiser from the documentation. What am I missing?

Your use of putStrLn is making the typechecker (correctly!) infer that your do block is meant to run in an IO context; the call to publish then requires the context to be an instance of RedisCtx, which IO is not.
Normally in a Redis context, the solution would be to lift the IO action into the Redis context using liftIO :: IO a -> m a from the MonadIO class, as in the example in the documentation:
runRedis conn $ do
  set "hello" "hello"
  set "world" "world"
  helloworld <- multiExec $ do
    hello <- get "hello"
    world <- get "world"
    return $ (,) <$> hello <*> world
  liftIO (print helloworld)
MonadIO is the class of monads in whose context you can execute IO actions.
However, in this case, it’s the other way around: the functional parameter to pubSub returns an IO action, but publish expects a RedisCtx monad.
It’s not clear to me whether pubSub allows you to make a runRedis call inside the callback, like this, although I think it should typecheck:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Database.Redis
import System.IO
main = do
  conn <- connect defaultConnectInfo
  runRedis conn $ do
    pubSub (subscribe ["commands"]) $ \msg -> do
      putStrLn $ "Message from " ++ show (msgChannel msg)
      runRedis conn $ publish "results" "Result of a very interesting calculation"
      return mempty
Based on skimming the docs, each runRedis call takes a connection from the connection pool, whose default size is 50; however, if there is no available connection, it blocks, so my concern is that because the docs for pubSub say it’s “single-threaded”, this could deadlock waiting for a connection that won’t be released since you’re in a “nested” runRedis call.
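One way to sidestep that concern entirely (a sketch, not something I've verified against hedis) is to open a second, dedicated connection up front and publish through that, so the subscription's connection pool is never asked for a second resource:
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Monad (void)
import Database.Redis

main :: IO ()
main = do
  subConn <- connect defaultConnectInfo   -- used only for the subscription
  pubConn <- connect defaultConnectInfo   -- separate pool, used only for publishing
  runRedis subConn $
    pubSub (subscribe ["commands"]) $ \msg -> do
      putStrLn $ "Message from " ++ show (msgChannel msg)
      void $ runRedis pubConn $
        publish "results" "Result of a very interesting calculation"
      return mempty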
I think the thing I would try next is to use the more flexible pubSubForever API; in the hedis test suite there’s an example of using pubSubForever with separate threads for publishing and handling subscription events.
main = do
  ctrl <- newPubSubController [("foo", msgHandler)] []
  conn <- connect defaultConnectInfo
  withAsync (publishThread conn) $ \_pubT -> do
    withAsync (handlerThread conn ctrl) $ \_handlerT -> do
      void $ hPutStrLn stderr "Press enter to subscribe to bar" >> getLine
      void $ addChannels ctrl [("bar", msgHandler)] []
      -- …
      -- (Add/remove various subscriptions.)
      -- …
publishThread uses runRedis and calls publish:
publishThread :: Connection -> IO ()
publishThread c = runRedis c $ loop (0 :: Int)
  where
    loop i = do
      let msg = encodeUtf8 $ pack $ "Publish iteration " ++ show i
      void $ publish "foo" ("foo" <> msg)
      -- …
      liftIO $ threadDelay $ 2*1000*1000
      loop (i+1)
handlerThread uses pubSubForever:
handlerThread :: Connection -> PubSubController -> IO ()
handlerThread conn ctrl = forever $
  pubSubForever conn ctrl onInitialComplete
    `catch` (\(e :: SomeException) -> do
      hPutStrLn stderr $ "Got error: " ++ show e
      threadDelay $ 50*1000)
This is wrapped in a call to forever to resubscribe if the connection is lost, per the docs for pubSubForever:
[…] if the network connection to Redis dies, pubSubForever will throw a ConnectionLost. When such an exception is thrown, you can recall pubSubForever with the same PubSubController which will open a new connection and resubscribe to all the channels which are tracked in the PubSubController.
This test uses Control.Concurrent.Async from the async package for managing tasks, which is a good idea imo. If you want to avoid that dependency, you could use forkIO instead (with e.g. a Chan or STM TChan to send events from the handler); the only issue is that this won't automatically notify the other threads if the forked thread terminates due to an exception, whereas Async makes some nice exception-safety guarantees.
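If you went that route, a very rough sketch of the forkIO-plus-Chan wiring (untested; the names here are mine) might be:
{-# LANGUAGE OverloadedStrings #-}
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, void)
import Data.ByteString (ByteString)
import Database.Redis

-- The subscription handler only writes incoming message bodies to a Chan;
-- a separate worker thread reads from it, does the work, and publishes.
main :: IO ()
main = do
  conn   <- connect defaultConnectInfo
  events <- newChan :: IO (Chan ByteString)
  ctrl   <- newPubSubController [("commands", writeChan events)] []
  void $ forkIO $ forever $ do
    payload <- readChan events
    void $ runRedis conn $ publish "results" payload
  pubSubForever conn ctrl (return ())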

Related

Detect a closed server connection via websockets in reflex-dom?

I've been using reflex and reflex-dom to recreate a web version of a boardgame, and I quite like it so far, but I require a websocket to alert a player when the other player has made a move.
Everything works great, but if the server goes down I can't find a way to detect that it happened and reconnect. Additionally, if you send an event to the server while it is down, it is just eaten up without any error.
I'm using a stripped down version of the websockets example from https://github.com/reflex-frp/reflex-examples/blob/master/websocket-echo/src/Main.hs
{-# LANGUAGE RecursiveDo #-}
module Lib where
import Data.Monoid
import Reflex.Dom
import qualified Data.Text as T
import Data.Text.Encoding (encodeUtf8, decodeUtf8)
wsurl = "ws://127.0.0.1:5714"
-- wsurl = "ws://echo.websocket.org"
someFunc = mainWidget $ do
  rec t <- textInput $ def & setValue .~ fmap (const "") newMessage
      b <- button "Send"
      text $ "Sending to " <> wsurl
      let newMessage = fmap ((:[]) . encodeUtf8 . T.pack) $ tag (current $ value t) $ leftmost [b, textInputGetEnter t]
      ws <- webSocket wsurl $ def & webSocketConfig_send .~ newMessage
      receivedMessages <- foldDyn (\m ms -> ms ++ [m]) [] $ _webSocket_recv ws
  el "p" $ text "Responses from :"
  _ <- el "ul" $ simpleList receivedMessages $ \m -> el "li" $ dynText =<< mapDyn (T.unpack . decodeUtf8) m
  return ()
I feel like there should be a way to do this with tickLossy, sending pings with a timeout: something like a dynamic that holds the websocket and reconnects if a ping goes a certain amount of time without a response. But I'm having trouble envisioning what the code to reconnect would look like.
Edit: It was an issue with reflex-dom sending an event while a websocket was still in the pending state. I made a pull request, although I feel there is a better solution somewhere.
Just FYI, since the question was posted there have been some quite relevant extensions to the WebSocket API merged into reflex-dom:
you can close websockets via Events, see _webSocketConfig_close
you can specify if you want auto-reconnect, see _webSocketConfig_reconnect
there is an Event exposed for when the connection is closed, see _webSocket_close
there is an Event exposed for when an error occurs, see _webSocket_error
I believe the close event is exactly what you were looking for. It was just not available at the time.
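As a rough sketch (using the field names listed above; exact lens names and event payload types vary between reflex-dom versions), enabling auto-reconnect and reacting to the close/error events might look something like:
ws <- webSocket wsurl $ def
        & webSocketConfig_send      .~ newMessage
        & webSocketConfig_reconnect .~ True   -- let the library reconnect for us
-- surface closes/errors in the UI instead of silently dropping messages
status <- holdDyn "connected" $ leftmost
  [ "websocket closed"  <$ _webSocket_close ws
  , "websocket errored" <$ _webSocket_error ws
  ]
el "p" $ dynText status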
It looks like when the websocket is closed, the library tries to reconnect:
start = do
  ws <- liftIO $ newWebSocket wv url onMessage onOpen $ do
    void $ forkIO $ do --TODO: Is the fork necessary, or do event handlers run in their own threads automatically?
      liftIO $ writeIORef currentSocketRef Nothing
      liftIO $ threadDelay 1000000
      start
  liftIO $ writeIORef currentSocketRef $ Just ws
  return ()
(newWebSocket takes the onClose event handler as its last argument)
And all the messages you are sending while reconnecting are ignored:
performEvent_ $ ffor (_webSocketConfig_send config) $ \payloads -> forM_ payloads $ \payload -> do
  mws <- liftIO $ readIORef currentSocketRef
  case mws of
    Nothing -> return () -- Discard --TODO: should we do something better here? probably buffer it, since we handle reconnection logic; how do we verify that the server has received things?
    Just ws -> do
      liftIO $ webSocketSend ws payload
You should probably open an issue on their issue tracker, or just find a better library.

Extending the IRC bot from wiki.haskell.org with state

Problem
I'm attempting to extend the IRC bot from https://wiki.haskell.org/Roll_your_own_IRC_bot with some state that's updated every time the bot posts a message in the channel it's connected to.
The feature is: every time the command !last said is issued in the IRC channel, the bot should respond with a time stamp. To support this, the privmsg function needs to update the bot's state -- specifically the lastPosted record -- with a new timestamp every time it is called.
Work so far
I took the code from the bottom of the Haskell wiki page (which used a ReaderT to access information about the bot's environment) and tried to change out the ReaderT for a State Transformer (StateT). The results are below and as you can see, I didn't get very far.
import Data.List
import Network
import System.IO
import System.Exit
import System.Time
import Control.Arrow
import Control.Monad.State
import Control.Exception
import Text.Printf
server = "irc.freenode.org"
port = 6667
chan = "#testbot-test"
nick = "testbottest"
-- The 'Net' monad, a wrapper over IO, carrying the bot's immutable state.
type Net = StateT Bot IO
data Bot = Bot { socket :: Handle, lastPosted :: ClockTime }
-- Set up actions to run on start and end, and run the main loop
main :: IO ()
main = bracket connect disconnect loop
  where
    disconnect = hClose . socket
    loop st = runStateT run st
-- Connect to the server and return the initial bot state
connect :: IO Bot
connect = notify $ do
    h <- connectTo server (PortNumber (fromIntegral port))
    t <- getClockTime
    hSetBuffering h NoBuffering
    return (Bot h t)
  where
    notify a = bracket_
      (printf "Connecting to %s ... " server >> hFlush stdout)
      (putStrLn "done.")
      a
-- We're in the Net monad now, so we've connected successfully
-- Join a channel, and start processing commands
run :: Net ()
run = do
  write "NICK" nick
  write "USER" (nick ++ " 0 * :test bot")
  write "JOIN" chan
  gets socket >>= listen
-- Process each line from the server
listen :: Handle -> Net ()
listen h = forever $ do
    s <- init `fmap` liftIO (hGetLine h)
    liftIO (putStrLn s)
    if ping s then pong s else eval (clean s)
  where
    forever a = a >> forever a
    clean = drop 1 . dropWhile (/= ':') . drop 1
    ping x = "PING :" `isPrefixOf` x
    pong x = write "PONG" (':' : drop 6 x)
-- Dispatch a command
eval :: String -> Net ()
eval "!quit" = write "QUIT" ":Exiting" >> liftIO (exitWith ExitSuccess)
-- Posting when something was last posted shouldn't count as last posted.
eval "!last said" = getLastPosted >>= (\t -> write "PRIVMSG" (chan ++ " :" ++ t))
eval x | "!id " `isPrefixOf` x = privmsg (drop 4 x)
eval _ = return () -- ignore everything else
getLastPosted :: Net String
getLastPosted = do
  t <- gets lastPosted
  return $ show t
-- Send a privmsg to the current chan + server
privmsg :: String -> Net ()
privmsg s = write "PRIVMSG" (chan ++ " :" ++ s)
-- Send a message out to the server we're currently connected to
write :: String -> String -> Net ()
write s t = do
  h <- gets socket
  liftIO $ hPrintf h "%s %s\r\n" s t
  liftIO $ printf "> %s %s\n" s t
Other support avenues explored
spent a couple of days reading up on ReaderT, StateT and their non-transformer friends Reader and State,
checking Stack Overflow for anyone with a similar problem, but the only other IRC bot question threaded the socket as an argument to every function that needed it (instead of using a ReaderT),
tweeted Don S., the original author of the wiki page,
asked in the Haskell IRC channel.
Question
How can the Haskell wiki IRC bot be extended to post a message containing the date and time stamp of the last message posted? Preferably using an abstraction like ReaderT or StateT rather than passing state around in function arguments.
I got your code to compile by simply adding a >> return () to the definition of loop in your main:
main :: IO ()
main = bracket connect disconnect loop
  where
    disconnect = hClose . socket
    loop st = (runStateT run st) >> return ()
This effectively ignores the return value of runStateT. Here are all of the variants of runState/runStateT:
runStateT - return both the final state and returned value
evalStateT - return only the final value
execStateT - return only the final state
Your original definition of loop was returning a pair (from runStateT), and this didn't type check since main wants a computation which returns just ().
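Equivalently, since run returns (), you could discard the pair with void or use evalStateT directly:
loop st = void (runStateT run st)
-- or
loop st = evalStateT run st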
To update the lastPosted field, consider this addition to the eval function which is triggered when the bot is sent the message !update time:
eval "!update time"
  = do t <- liftIO getClockTime
       bot <- get
       put (bot { lastPosted = t })
We need to liftIO getClockTime since we are operating in the Net monad.
Then we get the old state and put the updated state. You can add this logic wherever you want to update the lastPosted time in the Net monad.
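For the feature in the question, where every outgoing post should refresh the timestamp, the same get/put pattern (here written with modify) can go straight into privmsg, for example:
-- Send a privmsg to the current chan + server and remember when we posted it
privmsg :: String -> Net ()
privmsg s = do
    write "PRIVMSG" (chan ++ " :" ++ s)
    t <- liftIO getClockTime
    modify (\bot -> bot { lastPosted = t })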
Full code is available at: http://lpaste.net/142931

How to force yesod/warp to close open file handles before handling next request?

I've written a small server which accepts registrations as POST requests and persists them by appending them to a file. As soon as I put this server under load (I use Apache JMeter with 50 concurrent threads and a repeat count of 10, and the post consists of one field with ~7k of text data), I get lots of "resource busy, file is locked" errors:
02/Nov/2013:18:07:11 +0100 [Error#yesod-core] registrations.txt: openFile: resource busy (file is locked) #(yesod-core-1.2.4.2:Yesod.Core.Class.Yesod ./Yesod/Core/Class/Yesod.hs:485:5)
Here is a stripped-down version of the code:
{-# LANGUAGE QuasiQuotes, TemplateHaskell, MultiParamTypeClasses, OverloadedStrings, TypeFamilies #-}
import Yesod
import Text.Hamlet
import Control.Applicative ((<$>), (<*>))
import Control.Monad.IO.Class (liftIO)
import Data.Text (Text, pack, unpack)
import Data.String
import System.IO (withFile, IOMode(..), hPutStrLn)
data Server = Server
data Registration = Registration
    { text :: Text
    }
    deriving (Show, Read)
mkYesod "Server" [parseRoutes|
/reg RegR POST
|]
instance Yesod Server
instance RenderMessage Server FormMessage where
    renderMessage _ _ = defaultFormMessage
postRegR :: Handler Html
postRegR = do
    result <- runInputPost $ Registration
        <$> ireq textField "text"
    liftIO $ saveRegistration result
    defaultLayout [whamlet|<p>#{show result}|]
saveRegistration :: Registration -> IO ()
saveRegistration r = withFile "registrations.txt" AppendMode (\h -> hPutStrLn h $ "+" ++ show r)
main :: IO ()
main = warp 8080 Server
I compiled the code on purpose without -threaded, and the OS shows only a single thread running. Nonetheless it looks to me like the requests are not completely serialised, and a new request is already handled before the old one has been written to disk.
Could you tell me how I can avoid the error message and ensure that all requests are handled successfully? Performance is not an issue yet.
It's perfectly OK to write to a Handle from several threads. In fact, Handles have MVars inside them to prevent weird concurrent behaviour. What you probably want is not to handle MVars by hand (which can lead to deadlock if, for instance, a handler throws an exception) but to lift the withFile call outside the individual handler threads. The file stays open all the time; opening it on each request would be slow anyway.
I don't know much about Yesod, but I'd recommend something like this (probably doesn't compile):
data Server = Server { handle :: Handle }
postRegR :: Handler Html
postRegR = do
    h <- handle `fmap` getYesod
    result <- runInputPost $ Registration
        <$> ireq textField "text"
    liftIO $ saveRegistration h result
    defaultLayout [whamlet|<p>#{show result}|]
saveRegistration :: Handle -> Registration -> IO ()
saveRegistration h r = hPutStrLn h $ "+" ++ show r
main :: IO ()
main = withFile "registrations.txt" AppendMode $ \h -> warp 8080 (Server h)
-- maybe there's a better way?
Aside: if you wanted the file to be written asynchronously you could write to a queue (if it were a log file or something), but in your use case you probably want to let the user know if their registration failed, so I'd recommend staying with this form.
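For completeness, that queue-based variant could look roughly like this (a sketch; fine for fire-and-forget logging, but the handler no longer learns whether the write succeeded):
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever)

data Server = Server { logQueue :: Chan Registration }

main :: IO ()
main = withFile "registrations.txt" AppendMode $ \h -> do
    q <- newChan
    -- a single writer thread owns the Handle; handlers only enqueue
    _ <- forkIO $ forever $ readChan q >>= hPutStrLn h . ("+" ++) . show
    warp 8080 (Server q)

-- and in postRegR:  liftIO $ writeChan (logQueue server) result
--                   (where server comes from getYesod)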
Even without -threaded the Haskell runtime will have several "green threads" running cooperatively. You need to use Control.Concurrent to limit access to the file because you cannot have several threads writing to it at once.
The easiest way is to have an MVar () in your Server and have each request "take" the unit from the MVar before opening the file and then put it back after the file operation has been completed. You can use bracket to ensure that the lock is released even if writing the file fails. E.g. something like
import Control.Concurrent
import Control.Exception (bracket_)
type Lock = MVar ()
data Server = Server { fileLock :: Lock }
saveRegistration :: Registration -> Lock -> IO ()
saveRegistration r lock = bracket_ acquire release updateFile where
    acquire = takeMVar lock
    release = putMVar lock ()
    updateFile =
        withFile "registrations.txt" AppendMode (\h -> hPutStrLn h $ "+" ++ show r)
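A minimal sketch of wiring the lock into the rest of the app (assuming the handler fetches the foundation value with getYesod, as in the other answer):
main :: IO ()
main = do
    lock <- newMVar ()              -- a full MVar means "unlocked"
    warp 8080 (Server lock)

postRegR :: Handler Html
postRegR = do
    lock <- fileLock <$> getYesod
    result <- runInputPost $ Registration
        <$> ireq textField "text"
    liftIO $ saveRegistration result lock
    defaultLayout [whamlet|<p>#{show result}|]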

Haskell - Actor based mutability

I'm working on a Haskell network application and I use the actor pattern to manage multithreading. One thing I came across is how to store, for example, a set of client sockets/handles, which of course must be accessible to all threads and can change when clients log on/off.
Since I'm coming from the imperative world I first thought about some kind of locking mechanism, but when I noticed how ugly that gets I thought about "pure" mutability (well, actually it's only kind of pure):
import Control.Concurrent
import Control.Monad
import Network
import System.IO
import Data.List
import Data.Maybe
import System.Environment
import Control.Exception
newStorage :: (Eq a, Show a) => IO (Chan (String, Maybe (Chan [a]), Maybe a))
newStorage = do
  q <- newChan
  forkIO $ storage [] q
  return q
newHandleStorage :: IO (Chan (String, Maybe (Chan [Handle]), Maybe Handle))
newHandleStorage = newStorage
storage :: (Eq a, Show a) => [a] -> Chan (String, Maybe (Chan [a]), Maybe a) -> IO ()
storage s q = do
  let loop = (`storage` q)
  (req, reply, d) <- readChan q
  print ("processing " ++ show d)
  case req of
    "add"    -> loop (fromJust d : s)
    "remove" -> loop (delete (fromJust d) s)
    "get"    -> do
      writeChan (fromJust reply) s
      loop s
store s d = writeChan s ("add", Nothing, Just d)
unstore s d = writeChan s ("remove", Nothing, Just d)
request s = do
  chan <- newChan
  writeChan s ("get", Just chan, Nothing)
  readChan chan
The point is that a thread (an actor) manages a list of items and modifies the list according to incoming requests. Since threads are really cheap, I thought this could be a really nice functional alternative.
Of course this is just a prototype (a quick dirty proof of concept).
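For example, a client-handling thread would use it roughly like this:
example :: IO ()
example = do
  handles <- newHandleStorage
  store handles stdout        -- pretend stdout is a freshly accepted client handle
  hs <- request handles
  print (length hs)           -- 1
  unstore handles stdout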
So my question is:
Is this a "good" way of managing shared mutable variables (in the actor world)?
Is there already a library for this pattern? (I already searched but found nothing.)
Regards,
Chris
Here is a quick and dirty example using stm and pipes-network. This will set up a simple server that allows clients to connect and increment or decrement a counter. It will display a very simple status bar showing the current tallies of all connected clients and will remove client tallies from the bar when they disconnect.
First I will begin with the server, and I've generously commented the code to explain how it works:
import Control.Concurrent.STM (STM, atomically)
import Control.Concurrent.STM.TVar
import qualified Data.HashMap.Strict as H
import Data.Foldable (forM_)
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (unless)
import Control.Monad.Trans.State.Strict
import qualified Data.ByteString.Char8 as B
import Control.Proxy
import Control.Proxy.TCP
import System.IO
main = do
  hSetBuffering stdout NoBuffering
  {- These are the internal data structures. They should be an implementation
     detail and you should never expose these references to the
     "business logic" part of the application. -}
  -- I use nRef to keep track of creating fresh Ints (which identify users)
  nRef <- newTVarIO 0 :: IO (TVar Int)
  {- hMap associates every user (i.e. Int) with a counter
     Notice how I've "striped" the hash map by storing STM references to the
     values instead of storing the values directly. This means that I only
     actually write the hashmap when adding or removing users, which reduces
     contention for the hash map.
     Since each user gets their own unique STM reference for their counter,
     modifying counters does not cause contention with other counters or
     contention with the hash map. -}
  hMap <- newTVarIO H.empty :: IO (TVar (H.HashMap Int (TVar Int)))
  {- The following code makes heavy use of Haskell's pure closures. Each
     'let' binding closes over its current environment, which is safe since
     Haskell is pure. -}
  let {- 'getCounters' is the only server-facing command in our STM API. The
         only permitted operation is retrieving the current set of user
         counters.
         'getCounters' closes over the 'hMap' reference currently in scope so
         that the server never needs to be aware about our internal
         implementation. -}
      getCounters :: STM [Int]
      getCounters = do
        refs <- fmap H.elems (readTVar hMap)
        mapM readTVar refs
      {- 'init' is the only client-facing command in our STM API. It
         initializes the client's entry in the hash map and returns two
         commands: the first command is what the client calls to 'increment'
         their counter and the second command is what the client calls to log
         off and delete their entry (the 'delete' command).
         Notice that those two returned commands each close over the client's
         unique STM reference so the client never needs to be aware of how
         exactly 'init' is implemented under the hood. -}
      init :: STM (STM (), STM ())
      init = do
        n <- readTVar nRef
        writeTVar nRef $! n + 1
        ref <- newTVar 0
        modifyTVar' hMap (H.insert n ref)
        let incrementRef :: STM ()
            incrementRef = do
              mRef <- fmap (H.lookup n) (readTVar hMap)
              forM_ mRef $ \ref -> modifyTVar' ref (+ 1)
            deleteRef :: STM ()
            deleteRef = modifyTVar' hMap (H.delete n)
        return (incrementRef, deleteRef)
  {- Now for the actual program logic. Everything past this point only uses
     the approved STM API (i.e. 'getCounters' and 'init'). If I wanted I
     could factor the above approved STM API into a separate module to enforce
     the encapsulation boundary, but I am lazy. -}
  {- Fork a thread which polls the current state of the counters and displays
     it to the console. There is a way to implement this without polling but
     this gets the job done for now.
     Most of what it is doing is just some simple tricks to reuse the same
     console line instead of outputting a stream of lines. Otherwise it
     would be just:
     forkIO $ forever $ do
         ns <- atomically getCounters
         print ns
  -}
  forkIO $ (`evalStateT` 0) $ forever $ do
    del <- get
    lift $ do
      putStr (replicate del '\b')
      putStr (replicate del ' ' )
      putStr (replicate del '\b')
    ns <- lift $ atomically getCounters
    let str = show ns
    lift $ putStr str
    put $! length str
    lift $ threadDelay 10000
  {- Fork a thread for each incoming connection, which listens to the client's
     commands and translates them into 'STM' actions -}
  serve HostAny "8080" $ \(socket, _) -> do
    (increment, delete) <- atomically init
    {- Right now, just do the dumb thing and convert all keypresses into
       increment commands, with the exception of the 'q' key, which will
       quit -}
    let handler :: (Proxy p) => () -> Consumer p Char IO ()
        handler () = runIdentityP loop
          where
            loop = do
              c <- request ()
              unless (c == 'q') $ do
                lift $ atomically increment
                loop
    {- This uses my 'pipes' library. It basically is a high-level way to
       say:
       * Read binary packets from the socket no bigger than 4096 bytes
       * Get the first character from each packet and discard the rest
       * Handle the character using the above 'handler' function -}
    runProxy $ socketReadS 4096 socket >-> mapD B.head >-> handler
    {- The above pipeline finishes either when the socket closes or
       'handler' stops looping because it received a 'q'. Either case means
       that the client is done so we log them out using 'delete'. -}
    atomically delete
Next up is the client, which simply opens a connection and forwards all key presses as single packets:
import Control.Monad
import Control.Proxy
import Control.Proxy.Safe
import Control.Proxy.TCP.Safe
import Data.ByteString.Char8 (pack)
import System.IO
main = do
  hSetBuffering stdin NoBuffering
  hSetEcho stdin False
  {- Again, this uses my 'pipes' library. It basically says:
     * Read characters from the console using 'commands'
     * Pack them into a binary format
     * send them to a server running at 127.0.0.1:8080
     This finishes looping when the user types a 'q' or the connection is
     closed for whatever reason.
  -}
  runSafeIO $ runProxy $ runEitherK $
    try . commands
      >-> mapD (\c -> pack [c])
      >-> connectWriteD Nothing "127.0.0.1" "8080"
commands :: (Proxy p) => () -> Producer p Char IO ()
commands () = runIdentityP loop
  where
    loop = do
      c <- lift getChar
      respond c
      unless (c == 'q') loop
It's pretty simple: commands generates a stream of Chars, which then get converted to ByteStrings and then sent as packets to the server.
If you run the server and a few clients and have them each type in a few keys, your server display will output a list showing how many keys each client typed:
[1,6,4]
... and if some of the clients disconnect they will be removed from the list:
[1,4]
Note that the pipes component of these examples will simplify greatly in the upcoming pipes-4.0.0 release, but the current pipes ecosystem still gets the job done as is.
First, I'd definitely recommend using your own specific data type for representing commands. When using (String, Maybe (Chan [a]), Maybe a) a buggy client can crash your actor simply by sending an unknown command or by sending ("add", Nothing, Nothing), etc. I'd suggest something like
data Command a = Add a | Remove a | Get (Chan [a])
Then you can pattern match on commands in storage in a type-safe way.
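With that type, the storage loop might look something like this (a sketch):
storage :: (Eq a) => [a] -> Chan (Command a) -> IO ()
storage s q = do
  cmd <- readChan q
  case cmd of
    Add x     -> storage (x : s) q
    Remove x  -> storage (delete x s) q   -- delete from Data.List
    Get reply -> writeChan reply s >> storage s q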
Actors have their advantages, but I also feel that they have some drawbacks. For example, getting an answer from an actor requires sending it a command and then waiting for a reply, and the client can't be completely sure that it gets a reply, or that the reply will be of some specific type: you can't say you want only answers of a particular type (and how many of them) for a particular command.
So as an example I'll give a simple STM solution. It'd be better to use a hash table or a (balanced tree) set, but since Handle implements neither Ord nor Hashable, we can't use these data structures, so I'll keep using lists.
module ThreadSet (
  TSet, add, remove, get
) where
import Control.Monad
import Control.Monad.STM
import Control.Concurrent.STM.TVar
import Data.List (delete)
newtype TSet a = TSet (TVar [a])
add :: (Eq a) => a -> TSet a -> STM ()
add x (TSet v) = readTVar v >>= writeTVar v . (x :)
remove :: (Eq a) => a -> TSet a -> STM ()
remove x (TSet v) = readTVar v >>= writeTVar v . delete x
get :: (Eq a) => TSet a -> STM [a]
get (TSet v) = readTVar v
This module implements an STM-based set of arbitrary elements. You can have multiple such sets and use them together in a single STM transaction that succeeds or fails at once. For example
-- | Ensures that there is exactly one element `x` in the set.
add1 :: (Eq a) => a -> TSet a -> STM ()
add1 x v = remove x v >> add x v
This would be difficult with actors: you'd have to add it as another command for the actor; you can't compose it from existing actions and still have atomicity.
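Similarly, using two TSets in one transaction is just another composition:
-- Move `x` from one set to another; no other thread can observe a state
-- in which `x` is in neither (or both) of the sets.
move :: (Eq a) => a -> TSet a -> TSet a -> STM ()
move x from to = remove x from >> add x to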
Update: There is an interesting article explaining why Clojure's designers chose not to use actors. For example, with actors, even if you have many reads and only very few writes to a mutable structure, they're all serialized, which can greatly impact performance.

Finalization in Pipes-2.1.0 package

I am using the Pipes-2.1.0 package and the zeromq3-haskell package to construct a little message pipeline. Everything seems to be going well except that I am having trouble understanding finalization of Frames.
In the following Frame I acquire two resources: a zeromq context and a zeromq socket. Then I continuously wait for messages (in the form of ByteStrings) to publish on the zeromq socket.
{-# LANGUAGE RebindableSyntax #-}
{-# LANGUAGE ScopedTypeVariables #-}
module PipesZeroMQ where
import Control.Frame
import Control.IMonad.Do
import Control.IMonad.Trans
import qualified Control.Monad as M
import Data.ByteString (ByteString)
import Data.String
import Prelude hiding (Monad(..))
import qualified System.ZMQ3 as ZMQ
type Address = String
fromList :: (M.Monad m) => [b] -> Frame b m (M a) (M a) ()
fromList xs = mapMR_ yield xs
publisher :: Address -> Frame Void IO (M ByteString) C ()
publisher addr = do
  c <- liftU $ ZMQ.init 1
  s <- liftU $ ZMQ.socket c ZMQ.Pub
  liftU $ ZMQ.bind s addr
  liftU $ print "Socket open for business!!!"
  foreverR $ do
    bs <- await
    finallyF (ZMQ.close s M.>> ZMQ.term c M.>> print "ZMQ socket closed") $ do
      (liftU $ ZMQ.send s [] bs)
      (liftU (print "Sending message"))
Now if I try this:
λ> runFrame $ (publisher localAddress) <-< (fromList ["This", "that", "that"] >> close)
I get this:
"Socket open for business"
"Sending message"
"ZMQ socket closed"
*** Exception: ZMQError { errno = 88, source = "send", message = "Socket operation on non-socket" }
publisher finalizes after receiving just one ByteString.
Why is this happening?
What am I misunderstanding about finalization using Frames in Pipes-2.1.0?
Does the tree outside stand a chance if I start attacking it?
You made a mistake when writing the publisher function:
foreverR $ do
  bs <- await
  finallyF (ZMQ.close s M.>> ZMQ.term c M.>> print "ZMQ socket closed") $ do
    (liftU $ ZMQ.send s [] bs)
    (liftU (print "Sending message"))
You probably wanted to place the finallyF OUTSIDE the foreverR loop:
finallyF (...) $ foreverR $ do
  bs <- await
  liftU $ ZMQ.send s [] bs
  liftU (print "Sending message")
The way you wrote it, the finalizer runs after each send, so it's doing exactly what you told it to do. finallyF calls the finalizer once the action it wraps completes, whether it terminates successfully or not. You could also use catchF in that case, since the loop never terminates anyway:
catchF (...) $ foreverR $ do
  bs <- await
  liftU $ ZMQ.send s [] bs
  liftU (print "Sending message")
Alternatively, you could have kept it inside the loop but switched to catchF so that the finalizer doesn't get run after each send:
foreverR $ do
  bs <- await
  catchF (ZMQ.close s M.>> ZMQ.term c M.>> print "ZMQ socket closed") $ do
    (liftU $ ZMQ.send s [] bs)
    (liftU (print "Sending message"))
Also, if you are planning on writing a zeroMQ library based on pipes, keep in touch with me because I'm planning to return frames back to an ordinary monad in the next release with a lot of new enhancements to functionality, too, such as the ability to close and reinitialize resources. To reach me use my gmail.com address with username Gabriel439.

Resources