monadically iterating over a Data.Sequence - haskell

I have a Data.Sequence which I need to iterate over. The problem is that the iteration is stateful, and the sequence may grow as a result of it.
data Chart = Chart {
    charts :: M.Map Int (Seq.Seq RState), -- map from position to list of chart states
    ...
    } deriving (Show)
processChartSeq :: Int -> Int -> State Chart ()
processChartSeq chtIndx stIndx = do
    s <- get
    let seq    = fromJust $ M.lookup chtIndx (charts s)
        rstate = Seq.index seq stIndx
    processState rstate
    when (stIndx < Seq.length seq) (processChartSeq chtIndx (stIndx+1))
So I'm doing this with explicit recursion, but it seems clunky. Furthermore, it seems like a common thing to do. What common monadic control structure did I re-invent, badly?

To answer the initial question: you "reinvented" the state monad within the state monad itself! You could move the values you are explicitly threading (the indices) into your state to avoid passing them directly. On the other hand, for simple things like this, I have no problem passing things around explicitly, and I often find it cleaner.
Finally, as Daniel Wagner notes, you should be pushing and popping from the top as much as possible, for the sake of efficiency.
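For reference, here is a minimal sketch (my formulation, not part of the original answers) of the same loop with the index threaded through a local helper rather than the top-level signature. It assumes the Chart type above, a processState :: RState -> State Chart () from the question, and the usual imports (Control.Monad.State, Data.Maybe, qualified Data.Map as M and Data.Sequence as Seq):

processChartSeq' :: Int -> State Chart ()
processChartSeq' chtIndx = go 0
  where
    go stIndx = do
      -- re-read the (possibly grown) sequence on every step
      sq <- gets (fromJust . M.lookup chtIndx . charts)
      when (stIndx < Seq.length sq) $ do
        processState (Seq.index sq stIndx)
        go (stIndx + 1)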


How to modify parts of a State in Haskell

I have a number of operations which modify a System. System is defined like this:
data System = Sys {
    sysId        :: Int,
    sysRand      :: StdGen,
    sysProcesses :: ProcessDb,
    sysItems     :: ItemDb
}
with e.g.
type ProcessDb = M.Map Int Process
But I also have some functions, which do not need access to the full System, but have types like this:
foo' :: (Process, ItemDb) -> ((Process, ItemDb),[Event])
Currently I gave them types like
foo :: System -> (System, [Event])
But this is a needlessly broad interface. To use the narrow interface above in conjunction with System, I would have to extract a single Process and the ItemDb from System, run foo', and then modify System with the results.
This is quite a bit of unwrapping and wrapping, and it results in more lines of code than just passing the System as a whole and letting foo extract whatever it needs. In the latter case, however, the wrapping and unwrapping is mingled with the actual foo' operation, and I have the feeling that these two aspects should be separated.
I suppose I need some kind of lifting operation which turns a narrow foo' into a foo. I suppose I could write this, but I would have to write such a lifter for every signature of the narrow functions, resulting in lots of different lifters.
Is there an idiom for solving such problems?
Is it worth bothering?
One common solution is to use a class, possibly created by the Template Haskell magic of Control.Lens.TH.makeClassy. The gist is that you pass in the whole System, but you don't let the function know that that's what you're giving it. All it's allowed to know is that what you're giving it offers methods for getting and/or modifying the pieces it's supposed to handle.
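As a rough sketch of that idea (the field names and types below are simplified placeholders, not taken from the question), makeClassy generates a HasSystem class whose methods are lenses, so a function can demand only "something that contains a System":

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data ItemDb = ItemDb deriving Show

data System = System
  { _sysId    :: Int
  , _sysItems :: ItemDb
  } deriving Show

makeClassy ''System
-- generates: class HasSystem s where system :: Lens' s System
-- plus sysId and sysItems as lenses usable on any HasSystem instance

-- This function never sees the full application state, only the System inside it.
bumpId :: HasSystem s => s -> s
bumpId = over sysId (+ 1)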
I ended up writing a function which works on any State and which requires a "Lens" that captures the specific transformation from the bigger state to the smaller state and back:
focus :: Lens s' s -> State s' a -> State s a
focus lens ms' = do
    s <- get
    let (s', set) = lens s
        (a, s'')  = runState ms' s'
    put (set s'')
    return a
It allows me to write things like
run :: ExitP -> State SimState Log
...
do
    evqs' <- focus onSys $ step (t, evt)
    ...
where step operates on the "smaller" state:
step :: Timed Event -> State Sys.System [EventQu]
Here onSys is a "Lens" and it works like this:
onSys :: Lens Sys.System SimState
onSys (Sis e s) = (s, Sis e)
where SimState is defined as
data SimState = Sis {
    events :: EventQu,
    sisSys :: Sys.System
}
I suppose the existing lens libraries follow a similar approach, but do much more magic, like creating lenses automatically. I had shied away from lens libraries; instead I was pleased to realise that all it took was a few lines of code to get what I need.
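For comparison, here is a minimal sketch (my example, not from the answer) of the same idea using the lens library, whose zoom combinator plays the role of focus for ordinary Lens' values:

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad.State

-- Illustrative stand-ins for the "small" and "big" states
data Inner = Inner { _count :: Int } deriving Show
data Outer = Outer { _label :: String, _inner :: Inner } deriving Show
makeLenses ''Inner
makeLenses ''Outer

-- An action on the small state
bump :: State Inner ()
bump = count += 1

-- Lifted to the big state with zoom, much like focus above
runOuter :: State Outer ()
runOuter = zoom inner bump

-- execState runOuter (Outer "x" (Inner 0))
--   ==> Outer {_label = "x", _inner = Inner {_count = 1}}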

linear congruential generator in haskell

This is a very simple linear congruential pseudo-random number generator. It works fine when I seed it, but I want to make it re-seed itself with every produced number. The problem is that I don't know how to do that in Haskell, where mutable variables do not exist. I can feed the produced number back in recursively, but then my result would be a list of integers instead of a single number.
linCongGen :: Int -> Int
linCongGen seed = ((2*seed) + 3) `mod` 100
I'll summarize the comments a bit more meaningfully. The simplest solution is, like you observed, an infinite list of the sequence of generated elements. Then, every time you want to get a new number, pop off the head of that list.
linCongGen :: Integral a => a -> [a]
linCongGen = iterate $ \x -> ((2*x) + 3) `mod` 100
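For example, taking the first few values from a seed of 71:

ghci> take 5 (linCongGen 71)
[71,45,93,89,81]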
That said, here is a solution (one which I do not agree with) that does what I think you want. For mutable state, we usually use IORef, which is sort of like a reference or pointer. Here is the code; please read the disclaimer afterwards, though.
import Data.IORef
import System.IO.Unsafe

seed :: IORef Int
seed = unsafePerformIO $ newIORef 71

linCongGen :: IO Int
linCongGen = do
    previous <- readIORef seed
    modifyIORef' seed $ \x -> ((2*x) + 3) `mod` 100
    return previous
And here is a sample usage printing out the first hundred numbers generated: main = replicateM_ 100 $ linCongGen >>= print (you'll need to have Control.Monad imported too, for replicateM_).
DISCLAIMER
This is a bit of a hacky approach, described here. As the link says, "Maybe the need for global mutable state is a symptom of bad design." The link also has a good description of a more intelligent workaround. Creating an IORef is inherently an IO operation, and we really shouldn't be using unsafePerformIO on it. If you find yourself fighting Haskell in this way, it's because Haskell was designed to get in your way when you are doing things you shouldn't.
That said, I find comfort in knowing that this approach is also the one used in System.Random (the standard random number module) to define the initial seed (check out the source).
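For completeness, here is a minimal sketch of the kind of workaround the disclaimer alludes to (my formulation, not taken from the linked post): thread the seed through a State monad instead of a global IORef.

import Control.Monad.State

-- Each step returns the current value and stores the next seed.
linCongStep :: State Int Int
linCongStep = do
    previous <- get
    put (((2 * previous) + 3) `mod` 100)
    return previous

-- evalState (replicateM 5 linCongStep) 71  ==>  [71,45,93,89,81]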

Accessing vector element by index using lens

I'm looking for a way to reference an element of a vector using the lens library...
Let me try to explain what I'm trying to achieve using a simplified example of my code.
I'm working in this monad transformer stack (where StateT is the focus, everything else is not important)
newtype MyType a = MyType (StateT MyState (ExceptT String IO) a)
MyState has a lot of fields, but one of them is a vector of clients, which is a data type I defined:
data MyState = MyState
    { ...
    , _clients :: V.Vector ClientT
    }
Whenever I need to access one of my clients I tend to do it like this:
import Control.Lens (use)
c <- use clients
let neededClient = c V.! someIndex
... -- calculate something, update client if needed
clients %= (V.// [(someIndex, updatedClient)])
Now, here is what I'm looking for: I would like my function to receive a "reference" to the client I'm interested in and use it (retrieve it from State, update it if needed).
In order to clear up what I mean, here is a snippet (pseudo-code; it won't compile):
...
myFunction (clients.ix 0)
...
myFunction clientLens = do
    c <- use clientLens -- I would like to access a client in the vector
    ... -- calculate stuff
    clientLens .= updatedClient
Basically, I would like to pass to myFunction something from the lens library (I don't know what I'm passing here... a Lens? a Traversal? a Getting? some other thingy?) which will allow me to point at a particular element in the vector kept in my StateT. Is it at all possible? Currently, when using clients.ix 0, I get an error that my ClientT is not an instance of Monoid.
This is a very dumbed down version of what I have. Answering the question "why do I need it this way" would require a lot more explanation. I'm interested in whether it is possible to pass this "reference" which will point to some element of my vector kept in State.
clients.ix 0 is a traversal. In particular, traversals are setters, so setting and modifying should work fine:
clients.ix 0 .= updatedClient
Your problem is with use. Because a traversal doesn't necessarily contain exactly one target, when you use a traversal (or apply some other getter operation to it), it combines all the targets, which requires them to be of a Monoid type.
In particular,
use (clients.ix n)
would want to return mempty if n is out of bounds.
Instead, you can use the preuse function, which discards all but the first target of a traversal (or more generally, a fold), and wraps it in a Maybe. E.g.
Just c <- preuse (clients.ix n)
Note this will give a pattern match error if n is out of bounds, since preuse returns Nothing then.
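Putting the pieces together, here is a minimal, self-contained sketch of the pattern the question asks for (the types and names are illustrative, not the question's actual code): a function that accepts any traversal into the state and updates its target, if present.

{-# LANGUAGE RankNTypes, TemplateHaskell #-}
import Control.Lens
import Control.Monad.State
import qualified Data.Vector as V

-- Illustrative state: a vector of Int "clients"
data S = S { _clients :: V.Vector Int } deriving Show
makeLenses ''S

-- Accepts any traversal into S and bumps its (first) target, if it exists.
bump :: Traversal' S Int -> State S ()
bump l = do
    mc <- preuse l
    case mc of
        Just c  -> l .= c + 1
        Nothing -> return ()

-- usage: execState (bump (clients . ix 0)) (S (V.fromList [10, 20]))
--          ==> S {_clients = [11,20]}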

How should I organize normal and "state" versions of functions in the state monad?

I decided to try the State monad to clean up some of the projects I've started, and I ran into a naming/compartmentalization problem.
If I have the following objects:
data Obj = Player { oPos :: Point }
data World = World { wKeys :: [Key], wPlayer :: Obj }
I might have a convenience function like:
setPlayer :: Obj -> World -> World
setPlayer o w = w{wPlayer = o}
and a matching state operation like:
setPlayerW :: Obj -> WorldState ()
setPlayerW o = get >>= put . setPlayer o
which uses the other convenience function; for convenience.
What is the typical naming convention for something like this? I appended a W to the state version's name, but that's kind of ugly.
And are the "state versions" typically segregated from the "object versions" in a separate file?
Am I going about this wrong completely? Is there a better set-up than having two different versions of every operation I might need?
Personally, I wouldn't have a separate function for the state version. Instead, use the modify function:
do let o = player
   modify (setPlayer o)
   something else
In a sense, the naming convention you're looking for would just be the same as using modify except folded into the names of each function. When I find myself naming functions like this, I generally try to find some way to organize them in the language instead of using their names. Sometimes, like here, an existing function is all you need; other times, it involves creating a function of your own to achieve the same end or extracting things into a module.
The core idea is that it's better to reify patterns in your code using first-class language constructs instead of encoding them indirectly into the names. (Of course, if this ends up really awkward, you shouldn't do it, but it's fine here.)
This looks like a good time to introduce the lens library. It is a bit of a daunting library at first, and I still struggle with some of its more complex features (the rabbit hole is pretty deep for this one), but it can really simplify a lot of your State code. To use it, it's recommended to change your data types a bit first so you can utilize Template Haskell:
import Control.Monad
import Control.Monad.State
import Control.Lens

-- Made assumption on what Point would look like
data Point = Point { _x :: Int, _y :: Int } deriving (Eq, Show)
data Obj = Player { _oPos :: Point } deriving (Eq, Show)
data World = World { _wKeys :: [Key], _wPlayer :: Obj } deriving (Eq, Show)

makeLenses ''Point
makeLenses ''Obj
makeLenses ''World

-- Also assumed this type
type WorldState = StateT World IO
Then you can write code that looks very much like imperative code, using the generated lenses:
setPlayerW :: Obj -> WorldState ()
setPlayerW o = wPlayer .= o
Or if you want the non-monadic version
setPlayer :: Obj -> World -> World
setPlayer o = wPlayer .~ o
So this uses the .= operator to set the wPlayer field of the World state to the new value. More impressively, you could use it to write code like
moveUp, moveDown, moveLeft, moveRight :: WorldState ()
moveUp = wPlayer.oPos.y += 1
moveDown = wPlayer.oPos.y -= 1
moveLeft = wPlayer.oPos.x -= 1
moveRight = wPlayer.oPos.x += 1
This makes it look a lot like an object-oriented language, while using normal function composition. A quick test:
game :: WorldState ()
game = do
    replicateM_ 3 moveUp
    replicateM_ 5 moveLeft
    replicateM_ 10 moveRight
    replicateM_ 6 moveDown

> execStateT game $ World [] $ Player $ Point 0 0
World {_wKeys = [], _wPlayer = Player {_oPos = Point {_x = 5, _y = -3}}}
There are a lot of really interesting and useful operators in the lens library, and there's a lot of support built-in for using it with StateT stacks.
Another nice feature is the zoom function, which takes a lens and "zooms in" on it, letting you operate as if your state were the value of whatever you zoomed onto. An example would be
game = zoom (wPlayer.oPos) $ do
    y += 3
    x -= 5
    x += 10
    y -= 6
And this would produce the same result as before. This is generally more efficient (fewer layers to unwrap at each step) and can be much cleaner.

Short-lived memoization in Haskell?

In an object-oriented language, when I need to cache/memoize the results of a function for a known lifetime, I'll generally follow this pattern:
Create a new class
Add to the class a data member and a method for each function result I want to cache
Implement the method to first check to see if the result has been stored in the data member. If so, return that value; else call the function (with the appropriate arguments) and store the returned result in the data member.
Objects of this class will be initialized with values that are needed for the various function calls.
This object-based approach is very similar to the function-based memoization pattern described here: http://www.bardiak.com/2012/01/javascript-memoization-pattern.html
The main benefit of this approach is that the results are kept around only for the lifetime of the cache object. A common use case is in the processing of a list of work items. For each work item, one creates the cache object for that item, processes the work item with that cache object, then discards both the work item and the cache object before proceeding to the next work item.
What are good ways to implement short-lived memoization in Haskell? And does the answer depend on if the functions to be cached are pure or involve IO?
Just to reiterate - it would be nice to see solutions for functions which involve IO.
Let's use Luke Palmer's memoization library: Data.MemoCombinators
import qualified Data.MemoCombinators as Memo
import Data.Function (fix) -- we'll need this too
I'm going to define things slightly differently from how his library does, but it's basically the same (and, furthermore, compatible). A "memoizable" thing takes itself as input, and produces the "real" thing.
type Memoizable a = a -> a
A "memoizer" takes a function and produces the memoized version of it.
type Memoizer a b = (a -> b) -> a -> b
Let's write a little function to put these two things together. Given a Memoizable function and a Memoizer, we want the resultant memoized function.
runMemo :: Memoizer a b -> Memoizable (a -> b) -> a -> b
runMemo memo f = fix (f . memo)
This is a little bit of magic using the fixpoint combinator (fix). Never mind that; you can google it if you are interested.
So let's write a Memoizable version of the classic fib example:
fib :: Memoizable (Integer -> Integer)
fib self = go
    where
        go 0 = 1
        go 1 = 1
        go n = self (n-1) + self (n-2)
Using a self convention makes the code straightforward. Remember, self is what we expect to be the memoized version of this very function, so recursive calls should be on self. Now fire up ghci.
ghci> let fib' = runMemo Memo.integral fib
ghci> fib' 10000
WALL OF NUMBERS CRANKED OUT RIDICULOUSLY FAST
Now, the cool thing about runMemo is that you can create more than one freshly memoized version of the same function, and they will not share memory banks. That means I can write a function that locally creates and uses fib', but then as soon as fib' falls out of scope (or earlier, depending on the intelligence of the compiler), it can be garbage collected. It doesn't have to be memoized at the top level. This may or may not play nicely with memoization techniques that rely on unsafePerformIO. Data.MemoCombinators uses a pure, lazy trie, which fits perfectly with runMemo.
Rather than creating an object which essentially becomes a memoization manager, you can simply create memoized functions on demand. The catch is that if your function is recursive, it must be written as Memoizable. The good news is that you can plug in any Memoizer you wish. You could even use:
noMemo :: Memoizer a b
noMemo f = f
ghci> let fib' = runMemo noMemo fib
ghci> fib' 30 -- wait a while; it's computing stupidly
1346269
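To tie this back to the short-lived, per-work-item cache from the question, here is a minimal sketch (my example, not part of the answer): each call to processItem builds its own fresh memo table, which becomes garbage as soon as the call returns.

processItem :: Integer -> Integer
processItem item = fib' item + fib' (item `div` 2)
    where
        -- a fresh, private memo table for this work item only
        fib' = runMemo Memo.integral fib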
Lazy Haskell programming is, in a way, the memoization paradigm taken to an extreme. Also, whatever you do in an imperative language is possible in Haskell, using the IO monad, the ST monad, monad transformers, arrows, or what have you.
The only problem is that these abstraction devices are much more complicated than the imperative equivalent that you mentioned, and they need a pretty deep mind-rewiring.
I believe the above answers are both more complex than necessary, although they might be more portable than what I'm about to describe.
As I understand it, there is a rule in GHC that each value is computed exactly once when its enclosing lambda expression is entered. You may thus create exactly your short-lived memoization object as follows.
import qualified Data.Vector as V

indexerVector :: (t -> Int) -> V.Vector t -> Int -> [t]
indexerVector idx vec = \e -> tbl V.! e
    where
        m   = maximum $ map idx $ V.toList vec
        tbl = V.accumulate (flip (:)) (V.replicate (m+1) [])
                           (V.map (\v -> (idx v, v)) vec)
What does this do? It groups all the elements of the Data.Vector t passed as its second argument vec according to the integer computed by its first argument idx, retaining the grouping as a Data.Vector [t]. It returns a function of type Int -> [t] which looks up this grouping by the pre-computed index value.
Our compiler GHC has promised that tbl shall only be thunked once when we invoke indexerVector. We may therefore assign the lambda expression \e -> tbl V.! e returned by indexerVector to another value, which we may use repeatedly without fear that tbl ever gets recomputed. You may verify this by inserting a trace on tbl.
In short, your caching object is exactly this lambda expression.
I've found that almost anything you can accomplish with a short term object can be better accomplished by returning a lambda expression like this.
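A hypothetical usage sketch (the data and the `mod` 10 index are mine, purely for illustration): build an indexer once, then query it repeatedly while the grouping table is computed only on first use.

byLastDigit :: Int -> [Int]
byLastDigit = indexerVector (`mod` 10) (V.fromList [3, 13, 7, 23, 42])

-- byLastDigit 3  ==>  [23,13,3]   (groups are built with flip (:), so later
--                                  elements of the vector appear first)
-- byLastDigit 2  ==>  [42]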
You can use the very same pattern in Haskell too. Lazy evaluation will take care of checking whether a value has already been evaluated. It has been mentioned multiple times already, but a code example could be useful. In the example below, memoedValue will be calculated only once, when it is demanded.
data Memoed = Memoed
    { value       :: Int
    , memoedValue :: Int
    }

memo :: Int -> Memoed
memo i = Memoed
    { value       = i
    , memoedValue = expensiveComputation i
    }
Even better, you can memoize values which depend on other memoized values. You should avoid dependency loops, though; they can lead to nontermination.
data Memoed = Memoed
    { value        :: Int
    , memoedValue1 :: Int
    , memoedValue2 :: Int
    }

memo :: Int -> Memoed
memo i = r
    where
        r = Memoed
            { value        = i
            , memoedValue1 = expensiveComputation i
            , memoedValue2 = anotherComputation (memoedValue1 r)
            }
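A small sketch of how to convince yourself that memoedValue1 is computed only once; expensiveComputation and anotherComputation are assumed by the answer, so the definitions below are just stubs of mine:

import Debug.Trace (trace)

expensiveComputation :: Int -> Int
expensiveComputation i = trace "computing..." (i * i)

anotherComputation :: Int -> Int
anotherComputation = (+ 1)

-- ghci> let m = memo 7
-- ghci> memoedValue1 m   -- prints "computing..." the first time
-- 49
-- ghci> memoedValue1 m   -- cached; no trace output this time
-- 49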
