Haskell loop until a key is pressed

I would like to run a program every n seconds and refresh the screen (certain lines) until I press the q key. How can I achieve this?
I have checked the ansi-terminal package for refreshing certain parts of the screen. I'm guessing that I should use haskeline to get the loop and the user's key press. Am I correct?
If so, I can't find the right function in haskeline.

You can do this pretty simply with nothing but the IO monad, using threadDelay from Control.Concurrent.
import Control.Concurrent
refreshInterval :: Int
refreshInterval = 10000 -- in microseconds
refreshAction :: IO ()
refreshAction = undefined
main :: IO ()
main = do refreshAction
          threadDelay refreshInterval
          main
With this structure, you can now incorporate a keyboard quit via any library that gives you keyboard input. For example, with haskeline, you can read input with getInputChar, compare it to your quit character, and exit if necessary. But doing it directly this way will leave you unable to read input while the thread sleeps in threadDelay. If your input library buffers reads, this might be OK -- the quit/break action will simply occur when the program gets around to checking for input. If you need more than that, you can either roll your own solution via forkIO, as suggested in the comments and sketched below, or move to a library that handles this sort of thing a bit better.
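For reference, here is a minimal sketch of that forkIO approach (my own addition rather than part of the original answer): a watcher thread blocks on getChar until it sees 'q' and then fills an MVar, which the refresh loop checks once per iteration. refreshAction is a placeholder, and the Windows buffering caveat mentioned further down applies here as well.
import Control.Concurrent
import Control.Monad (unless, void)
import System.IO

refreshInterval :: Int
refreshInterval = 1000000              -- one second, in microseconds

refreshAction :: IO ()
refreshAction = putStrLn "refreshing"  -- placeholder

main :: IO ()
main = do
    hSetBuffering stdin NoBuffering    -- deliver key presses immediately
    hSetEcho stdin False               -- do not echo typed characters
    quit <- newEmptyMVar
    void . forkIO $ do                 -- watcher thread: block until 'q' arrives
        let waitForQ = do
                c <- getChar
                unless (c == 'q') waitForQ
        waitForQ
        putMVar quit ()
    let loop = do
            refreshAction
            threadDelay refreshInterval
            pressed <- fmap not (isEmptyMVar quit)
            unless pressed loop
    loop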
As an interesting side note, this is pretty easily solved within my OpenCV library using the parts of the library that wrap OpenCV's HighGUI functionality. I don't necessarily recommend pulling in that dependency just for this functionality, but it does technically solve your problem. I'll add that to this answer if anyone is interested.

Related

Why does changing my FRP implementation to be more reactive lag?

I got a version of snake working with the threepenny-gui library, but I didn't like the fact that I was explicitly calling newEvent and addStateUpdate manually instead of defining the behavior completely based on events, e.g. this:
(updates, addUpdate) <- liftIO newEvent
managerB <- accumB initialManager updates
on UI.tick timer $ \_ -> addUpdate $ \manager -> manager'
compared to:
managerB <- accumB initialManager $
    UI.tick timer $> \manager -> manager'
IIUC the second is more idiomatic FRP, as it defines a behavior with the actual event instead of creating a proxy event to proxy updates through. But when I make this change, it causes one of two problems:
If I define managerB first (using RecursiveDo to access timer, which is defined below), nothing's rendered at all
If I move managerB to the end (using RecursiveDo to access managerB from the DOM elements), the initial movement when hitting an arrow key for the first time lags, and the frames render in a jerky fashion.
Am I doing something wrong? What's the idiomatic way I should structure these events/behaviors?
Code diff here: https://github.com/brandonchinn178/snake/compare/inline-event-handlers
A not particularly pretty workaround for the jerkiness, which I tested on the alternate branch in your repository, is, so to speak, to use both approaches at once: re-firing the tick event and using that instead of UI.tick timer to define managerB:
(timeE, fireTime) <- liftIO newEvent
on UI.tick timer $ \_ -> liftIO (fireTime ())
let managerUpdateE =
      fmap concatenate . unions $
        [ timeE $> getNextManagerState
          -- Instead of: UI.tick timer $> getNextManagerState
          -- etc.
The issue appears to be that plugging UI.tick timer directly into the event network somehow gets in the way of Threepenny sending the JavaScript calls needed to update the UI in a timely way. The indirection in using on with fireTime (which, in particular, should mean timeE happens notionally after UI.tick timer) seems to skirt around the problem. A less intrusive workaround would be, instead of introducing timeE, explicitly calling flushCallBuffer in a handler for UI.tick timer; in my tests, however, that reduced the jerkiness a lot but didn't eliminate it completely. (See also threepenny-gui issue #191 for possibly relevant background information.)
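For reference, the flushCallBuffer variant amounts to something like this one-liner (a sketch; flushCallBuffer is exported by Graphics.UI.Threepenny.Core in reasonably recent threepenny-gui versions):
on UI.tick timer $ \_ -> flushCallBuffer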
As for the delay on the first keystroke, it appears that can be eliminated by moving your invocation of UI.start timer to the very end of gui, after managerB and the rest of your event network is set up.
(On an additional note, it is probably a good idea to follow the recommendation of the Graphics.UI.Threepenny.Timer docs and set -threaded in the ghc-options for compiling your executable, even if that doesn't seem to have an effect on the problem you describe here.)

Reactive Banana: Bindings

I am working on a small terminal-based UI and I want to use Reactive Banana for describing interactions. The only external event I am interested in, is whether the user has pressed a key.
From what I gathered from the Frameworks documentation, I can either use polling to get the state of the terminal input buffer or pass an "event handler" to Reactive Banana. I'd rather prefer the latter, but I find the event handling section confusing. How exactly would I describe an event-driven version of getChar? In the end, I'd like to operate on an "Event Key" stream.
I have looked at the SDL and wx bindings, but I don't find them of much help, since they are littered with library-related idiosyncrasies, and I'd much appreciate a more concise explanation of how event handlers, event loops and Reactive Banana mix together.
As it turns out, dealing with external events is remarkably flexible and simple with Reactive Banana. All that is needed is to create an event handler with newAddHandler. It consists of two pieces: an AddHandler, from which an event stream can be created with the fromAddHandler function, and a Handler, which has to be invoked by the glue code (binding) that bridges Reactive Banana to the framework (in my case, I just feed the result of getChar to the handler).
Here is some example code that echoes keystrokes:
import Control.Monad (forever)
import System.IO (BufferMode (NoBuffering), hReady, hSetBuffering, stdin)
import Reactive.Banana
import Reactive.Banana.Frameworks

echo :: IO ()
echo = do
    (keyEventHandler, fire) <- newAddHandler
    -- Network Specification (echo keyboard input)
    let networkDescription =
            fromAddHandler keyEventHandler >>=  -- Create event stream from handler
            reactimate . fmap print             -- Map print over event stream
    compile networkDescription >>= actuate
    -- Event Loop
    hSetBuffering stdin NoBuffering
    forever $ do
        ready <- hReady stdin
        if ready
            then getChar >>= fire  -- Create keyboard event
            else return ()
The nice thing about Reactive Banana is that the callback function fire can be invoked from any context. Thus, the event loop could just as well run in a separate thread or be invoked from a C library. More about this can be found in Heinrich's answer to How to implement a game loop in reactive-banana?
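For instance, the polling loop at the end of echo could just as well be forked off, leaving the main thread free for other work; the network itself does not change at all (forkIO comes from Control.Concurrent):
    _ <- forkIO . forever $ do
            ready <- hReady stdin
            if ready
                then getChar >>= fire
                else return ()
    -- ... and the main thread is free to do something else here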
However, catching arbitrary keyboard input events without polling is not as easy as I thought: POSIX won't allow it (if I am not mistaken), and even if it did, the code would not be portable.
Also note that on Windows machines you have to hit Enter; I still have to find a way around that problem. More on this issue can be found in the Haskell bug tracker.

Concurrency considerations between pipes and non-pipes code

I'm in the process of wrapping a C library for some encoding in a pipes interface, but I've hit upon some design decisions that need to be made.
After the C library is set up, we hold on to an encoder context. With this, we can either encode, or change some parameters (let's call the Haskell interface to this last function tune :: Context -> Int -> IO ()). There are two parts to my question:
The encoding part is easily wrapped up in a Pipe Foo Bar IO (), but I would also like to expose tune. Since simultaneous use of the encoding context must be lock protected, I would need to take a lock at every iteration in the pipe and protect tune with the same lock. But now I feel I'm forcing hidden locks on the user. Am I barking up the wrong tree here? How is this kind of situation normally resolved in the pipes ecosystem? In my case I expect the pipe that my specific code is part of to always run in its own thread, with tuning happening concurrently, but I don't want to force this point of view upon any users. Other packages in the pipes ecosystem do not seem to force this on their users either.
An encoding context that is no longer used needs to be properly de-initialized. How does one, in the pipes ecosystem, ensure that such things (in this case performing some IO actions) are taken care of when the pipe is destroyed?
A concrete example would be wrapping a compression library, in which case the above can be:
The compression strength is tunable. We set up the pipe and it runs along merrily. How should one best go about allowing the compression strength setting to be changed while the pipe keeps running, assuming that concurrent access to the compression codec context must be serialized?
The compression library allocated a bunch of memory off the Haskell heap when set up, and we'll need to call some library function to clean this up when the pipe is torn down.
Thanks… this might all be obvious, but I'm quite new to the pipes ecosystem.
Edit: Reading this after posting, I'm quite sure it's the vaguest question I've ever asked here. Ugh! Sorry ;-)
Regarding (1), the general solution is to change your Pipe's type to:
Pipe (Either (Context, Int) Foo) Bar IO ()
In other words, it accepts both Foo inputs and tune requests, which it processes internally.
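To make that concrete, such a pipe might look roughly like the following sketch, where encode :: Context -> Foo -> IO Bar stands in for the encoding wrapper and tune is the function from the question:
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import Pipes

encodeOrTune :: Context -> Pipe (Either (Context, Int) Foo) Bar IO ()
encodeOrTune ctx = forever $ do
    msg <- await
    case msg of
        Left (ctx', n) -> liftIO (tune ctx' n)               -- handle a tune request in place
        Right foo      -> liftIO (encode ctx foo) >>= yield  -- encode and pass downstream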
So let's then assume that you have two concurrent Producers corresponding to inputs and tune requests:
producer1 :: Producer Foo IO ()
producer2 :: Producer (Context, Int) IO ()
You can use pipes-concurrency to create a buffer that they both feed into, like this:
example = do
    (output, input) <- spawn Unbounded
    -- input  :: Input  (Either (Context, Int) Foo)
    -- output :: Output (Either (Context, Int) Foo)
    let io1 = runEffect $ producer1 >-> Pipes.Prelude.map Right >-> toOutput output
        io2 = runEffect $ producer2 >-> Pipes.Prelude.map Left  >-> toOutput output
    as <- mapM async [io1, io2]
    runEffect (fromInput input >-> yourPipe >-> someConsumer)
    mapM_ wait as
You can learn more about the pipes-concurrency library by reading this tutorial.
By forcing all tune requests to go through the same single-threaded Pipe you can ensure that you don't accidentally have two concurrent invocations of the tune function.
Regarding (2) there are two ways you can acquire a resource using pipes. The more sophisticated approach is to use the pipes-safe library, which provides a bracket function that you can use within a Pipe, but that is probably overkill for your purpose and only exists for acquiring and releasing multiple resources over the lifetime of a pipe. A simpler solution is just to use the following with idiom to acquire the pipe:
withEncoder :: (Pipe Foo Bar IO () -> IO r) -> IO r
withEncoder k = bracket acquire release $ \resource -> do
    k (createPipeFromResource resource)
Then a user would just write:
withEncoder $ \yourPipe -> do
    runEffect (someProducer >-> yourPipe >-> someConsumer)
You can optionally use the managed package, which simplifies the types a bit and makes it easier to acquire multiple resources. You can learn more about it from reading this blog post of mine.
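For completeness, here is a sketch of the managed variant, reusing the hypothetical acquire, release and createPipeFromResource names from above (managed and runManaged live in Control.Monad.Managed):
import Control.Exception (bracket)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Managed (Managed, managed, runManaged)
import Pipes

managedEncoder :: Managed (Pipe Foo Bar IO ())
managedEncoder = fmap createPipeFromResource (managed (bracket acquire release))

-- The resource is released once runManaged finishes.
useEncoder :: IO ()
useEncoder = runManaged $ do
    yourPipe <- managedEncoder
    liftIO $ runEffect (someProducer >-> yourPipe >-> someConsumer)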

Dealing with the current time in reactive-banana

How do you deal with the current time in reactive-banana?
Ideally I'd like to have a Behaviour which I can "poll" to get the current time. However, polling Behaviours with Events (via <@ etc.) gives me the value of the Behaviour from the previous Event, not the current value. (I realise this is to avoid cyclic definitions, which is indeed useful.)
I found fromPoll which I thought would help. Behaviours that are observed from fromPoll cannot depend on themselves, thus no cycles can be introduced by observing the behaviour just before this Event is fired rather than just after the previous Event fired.
A digression
In somewhat more formal terms, I am suggesting that Events always occur at time t+ and Behaviours are always observed at time t-, i.e. Events observe behaviours that happen an infinitesimally short time before them. New values of Behaviours generated by accumB and friends would always start from time t+ so could not be observed by Events which also happen at time t+.
Under this proposed semantics Behaviours created by fromPoll would be updated just before each Event is processed. Other Behaviours would be updated afterwards because they are created by accumB and friends.
My use case
Anyway, that's a significant digression to my main question. I want to know if there's some way to deal with current time (not the time of the previous Event) in reactive-banana. My use case is, for example, to keep track of the pings that entities send and if any of them hasn't sent a ping in a particular time interval to signal a warning event.
Of course I can and will fire off events very frequently, so my warnings won't be incorrect by a large amount. However it does seem to be a wart that they cannot be precise.
What's the right way of dealing with this?
Given your example use case, I think you should be fine if you stay away from fromPoll. To explain why, a few clarifications are needed. (Note: in what follows, "stream" refers to an Event t a, and "occurrence" to one of the firings which compose them.)
However, polling Behaviours with Events (via <@ etc.) gives me the value of the Behaviour from the previous Event, not the current value.
I suppose you are alluding to explanations such as this one, from the docs for stepper:
Note that the smaller-than-sign in the comparison timex < time means that the value of the behavior changes "slightly after" the event occurrences. This allows for recursive definitions.
That delay, however, is only with respect to the stream used to define the behaviour (i.e. the one you pass to stepper/accumB) and any streams that are synchronised with it. For instance, suppose you have two independent streams, eTick and eTock, and the following network snippet:
eIncrement = (+1) <$ eTick
bCount = accumB 0 eIncrement
eCountTick = bCount <@ eTick
eCountTock = bCount <@ eTock
eIncrement and eCountTick are in sync with eTick, and so the value observed through eCountTick is the "old" value; that is, the value before the synchronised update. From the point of view given by eCountTock, however, none of that matters. To an observer using eCountTock, there is no delay to speak of, and the value is always the current one.
Behaviours that are observed from fromPoll cannot depend on themselves, thus no cycles can be introduced by observing the behaviour just before this Event is fired rather than just after the previous Event fired.
We are only concerned with streams synchronised with the one which updates the behaviour. Thus, as far as observed values go "just before the next occurrence" and "just after the previous occurrence" boil down to the same thing. fromPoll, however, muddles things quite a bit. It creates a behaviour which is updated whenever any occurrence happens in the event network; and so the updates are synchronised with the union of all streams. There is no such thing as a stream independent from a fromPoll event, and therefore the observed value will be affected by the delay however we observe it. That being so, fromPoll won't work for an application-driving clock, which requires tracking continuous change with some accuracy.
Implicit in all of the above is that reactive-banana has no built-in notion of time. There are only the "logical" time lines within each stream, which can be interwoven by merging streams. So if we want a current time behaviour our best bet is building one from an independent stream. Here is a demo of that approach, which will produce fresh and timely results as far as the precision of threadDelay allows:
{-# LANGUAGE RankNTypes #-}
module Main where
import Control.Concurrent
import Control.Monad
import Data.Time
import Reactive.Banana
import Reactive.Banana.Frameworks
main = do
    let netDesc :: forall t. Frameworks t => Moment t ()
        netDesc = do
            (eTime, fireTime) <- newEvent
            liftIO . forkIO . forever $
                threadDelay (50 * 1000) >> getCurrentTime >>= fireTime
            bTime <- flip stepper eTime <$> liftIO getCurrentTime
            (eTick, fireTick) <- newEvent
            liftIO . forkIO . forever $
                threadDelay (5000 * 1000) >> fireTick ()
            reactimate $ print <$> bTime <@ eTick
    network <- compile netDesc
    actuate network >> threadDelay (52000 * 1000) >> pause network
bTime is updated through eTime every 0.05 s; it is observed through eTick, a stream independent from eTime with occurrences every 5 s. You can then use eTick and streams derived from it to observe and update your entities. Alternatively, you can combine bTime and the entity behaviours in applicative style to get, for instance, behaviours for the latest pings, to be observed with eTick.
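Here is a sketch of that applicative combination; Entity, bLastPings and pingTimeout are made-up names for your ping bookkeeping, while bTime and eTick are as in the demo above:
-- bLastPings :: Behavior t [(Entity, UTCTime)]  -- last ping per entity (assumed)
-- pingTimeout :: NominalDiffTime                -- allowed silence before warning (assumed)

bOverdue :: Behavior t [Entity]
bOverdue = findOverdue <$> bTime <*> bLastPings
  where
    findOverdue now pings =
        [ entity | (entity, lastPing) <- pings
                 , now `diffUTCTime` lastPing > pingTimeout ]

-- A warning event fired on every tick, carrying the currently overdue entities:
eWarning :: Event t [Entity]
eWarning = bOverdue <@ eTick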
Having a canonical time behaviour looks like a sound approach in your case; it is conceptually clear and readily generalisable for multiple ticks. In any case, other approaches that you can play with include getting rid of bTime and using eTick as a low-resolution current time stream (though that seems to make the threadDelay inaccuracies build up faster), and getting rid of eTick by using changes to get a stream of freshly updated values from the behaviour (though that comes with its own quirks and annoyances, as the documentation hints at).

A way to form a 'select' on MVars without polling

I have two MVars (well an MVar and a Chan). I need to pull things out of the Chan and process them until the other MVar is not empty any more. My ideal solution would be something like the UNIX select function where I pass in a list of (presumably empty) MVars and the thread blocks until one of them is full, then it returns the full MVar. Try as I might I can think of no way of doing this beyond repeatedly polling each MVar with isEmptyMVar until I get false. This seems inefficient.
A different thought was to use throwTo, but it interrupts whatever is happening in the thread, and I need to complete processing a job out of the Chan in an atomic fashion.
A final thought as I'm typing is to create a new forkIO for each MVar which tries to read its MVar then fill a newly created MVar with its own instance. The original thread can then block on that MVar. Are Haskell threads cheap enough to go running that many?
Haskell threads are very cheap, so you could solve it that way, but it sounds like STM would be a better fit for your problem. With STM you can do
do var <- atomically (takeTMVar a `orElse` takeTMVar b)
   ... do stuff with var
Because of the behavior of retry and orElse, this code tries to get a, then if that fails, get b. If both fail, it blocks until either of them is updated and tries again.
You could even use this to make your own rudimentary version of select:
select :: [TMVar a] -> STM a
select = foldr1 orElse . map takeTMVar
How about using STM versions, TChan and TVar, with the retry and orElse behavior?
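Concretely, for the asker's Chan-plus-MVar situation, that could look like this sketch; it uses a TMVar rather than a plain TVar for the stop signal, since takeTMVar gives the MVar-like blocking take:
import Control.Concurrent.STM

-- Block until either the stop signal or the next job is available.
nextStep :: TMVar stop -> TChan job -> IO (Either stop job)
nextStep stopVar jobs = atomically $
    (Left <$> takeTMVar stopVar) `orElse` (Right <$> readTChan jobs)
A Left result means the other thread has signalled; a Right hands you the next job to process, and the whole check happens atomically, with no polling.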
Implementing select is one of STM's nice capabilities. From "Composable Memory Transactions":
Beyond this, we also provide orElse, which allows them to be composed as alternatives, so that the second is run if the first retries (Section 3.4). This ability allows threads to wait for many things at once, like the Unix select system call – except that orElse composes well, whereas select does not.
orElse in RWH.
The STM package
Papers on Haskell's STM
