Walking a huge game tree - search

I have a game tree that is too big to walk in its entirety.
How can I write a function that will evaluate the tree until a time limit or depth limit is reached?

It would help to have a bit more detail, I think. Also, you raise two entirely separate issues--do you want both limits applied simultaneously, or are you looking for how to do each independently? That said, in rough terms:
Time limit: This is clearly impossible without using IO, to start with. Assuming your game tree traversal function is largely pure you'd probably prefer not to intertwine it with a bunch of time-tracking that determines control flow. I think the simplest thing here is probably to have the traversal produce a stream of progressively better results, place each "best result so far" into an MVar or such, and run it on a separate thread. If the time limit is reached, just kill the thread and take the current value from the MVar.
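A minimal sketch of that approach, assuming the pure traversal yields its results as a lazy list of progressively better answers (all names here are illustrative):

import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Concurrent.MVar

-- Run the traversal on its own thread, keeping the best result seen so
-- far in an MVar; kill the thread when the time limit expires.
searchWithTimeLimit :: Int -> [a] -> IO (Maybe a)
searchWithTimeLimit micros results = do
  best <- newMVar Nothing
  tid  <- forkIO $
    mapM_ (\r -> r `seq` modifyMVar_ best (const (return (Just r)))) results
  threadDelay micros   -- wait out the time limit
  killThread tid       -- stop the traversal
  readMVar best        -- best result found so far, if any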
Depth limit: The most thorough way to do this would be to simply perform a breadth-first search, wouldn't it? If that's not viable for whatever reason, I don't think there's any better solution than the obvious one of simply keeping a counter to indicate the current depth and not continuing deeper when the maximum is reached. Note that this is a case where the code can potentially be tidied up using a Reader-style monad, where each recursive call is wrapped in something like local (subtract 1).
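A sketch of the Reader-style depth limit, assuming a simple rose-tree representation of the game tree (the type is illustrative):

import Control.Monad.Reader

data GameTree a = Node a [GameTree a]

-- Collect values down to the current depth limit; each recursive call
-- runs with the limit reduced by one via 'local'.
depthLimited :: GameTree a -> Reader Int [a]
depthLimited (Node x children) = do
  depth <- ask
  if depth <= 0
    then return [x]
    else do
      below <- local (subtract 1) (mapM depthLimited children)
      return (x : concat below)

-- e.g. runReader (depthLimited tree) 5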

The timeout function in the base package allows you to kill a computation after a certain period. Interleaving timeout with a stream of increasingly deeper results, so that the most recent result is always stored in an MVar, is a relatively common trick for search problems in Haskell.
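A sketch of that trick, with timeout (from System.Timeout in base) doing the thread management by itself (the helper name is made up):

import System.Timeout (timeout)
import Control.Concurrent.MVar

-- Record each successively deeper result as it arrives; when the clock
-- runs out, whatever was stored last is the answer.
bestWithin :: Int -> [a] -> IO (Maybe a)
bestWithin micros results = do
  best <- newMVar Nothing
  _ <- timeout micros $
         mapM_ (\r -> r `seq` modifyMVar_ best (const (return (Just r)))) results
  readMVar best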

You can also use a lazy writer monad for your traversal, generating a list of improving answers. Now you've simplified your problem somewhat, to just taking the first "good enough" or "best so far" result from the list by some criteria. On top of that you can use the timeout trick that dons described, or any other approach you think appropriate...
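A sketch of that idea; because the lazy Writer's bind is lazy in the accumulated log, the list of answers can be consumed while the traversal is still running (the types are illustrative):

import Control.Monad.Writer.Lazy (Writer, execWriter, tell)

data GameTree a = Node a [GameTree a]

-- Log every candidate answer as the traversal finds it.
improving :: GameTree a -> Writer [a] ()
improving (Node x children) = tell [x] >> mapM_ improving children

-- Take the first answer meeting some criterion; laziness means the
-- traversal runs only as far as needed to produce it.
firstGoodEnough :: (a -> Bool) -> GameTree a -> Maybe a
firstGoodEnough good t =
  case filter good (execWriter (improving t)) of
    (r:_) -> Just r
    []    -> Nothing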

How does lazy evaluation allow for greater modularization?

In his article "Why Functional Programming Matters," John Hughes argues that "Lazy evaluation is perhaps the most powerful tool for modularization in the functional programmer's repertoire." To support this claim, he provides an example like this:
Suppose you have two functions, "infiniteLoop" and "terminationCondition." You can do the following:
terminationCondition(infiniteLoop input)
Lazy evaluation, in Hughes' words, "allows termination conditions to be separated from loop bodies." This is certainly true: with lazy evaluation, the condition can be defined outside the loop -- infiniteLoop will stop executing as soon as terminationCondition stops demanding data.
But couldn't higher-order functions achieve the same thing as follows?
infiniteLoop(input, terminationCondition)
How does lazy evaluation provide modularization here that's not provided by higher-order functions?
Yes, you could use a passed-in termination check, but for that to work the author of infiniteLoop would have had to foresee the possibility of wanting to terminate the loop with that sort of condition, and hard-wire a call to the termination condition into their function.
And even if the specific condition can be passed in as a function, the "shape" of it is predetermined by the author of infiniteLoop. What if they give me a termination condition "slot" that is called on each element, but I need access to the last several elements to check some sort of convergence condition? Maybe for a simple sequence generator you could come up with "the most general possible" termination condition type, but it's not obvious how to do so and remain efficient and easy to use. Do I repeatedly pass the entire sequence so far into the termination condition, in case that's what it's checking? Do I force my callers to wrap their simple termination conditions up in a more complicated package so they fit the most general condition type?
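To illustrate in Haskell (everything here is made up for the example): a lazy producer is written with no termination "slot" at all, and each consumer imposes a stopping rule of whatever shape it needs:

-- A lazy producer, e.g. Newton's method approximations to sqrt n.
sqrtApproxs :: Double -> [Double]
sqrtApproxs n = iterate (\x -> (x + n / x) / 2) 1

-- One consumer stops at a threshold...
firstAbove :: Double -> [Double] -> Double
firstAbove limit = head . dropWhile (<= limit)

-- ...another needs pairs of adjacent elements to test convergence, a
-- "shape" a per-element callback slot could not have anticipated.
converged :: Double -> [Double] -> Double
converged eps (x:y:rest)
  | abs (x - y) < eps = y
  | otherwise         = converged eps (y : rest)
converged _ _ = error "ran out of elements"

-- e.g. converged 1e-9 (sqrtApproxs 2) evaluates only as much of the
-- infinite list as it needs.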
The callers certainly have to know exactly how the termination condition is called in order to supply a correct condition. That could be quite a bit of dependence on this specific implementation. If they switch to a different implementation of infiniteLoop written by another third party, how likely is it that exactly the same design for the termination condition would be used? With a lazy infiniteLoop, I can drop in any implementation that is supposed to produce the same sequence.
And what if infiniteLoop isn't a simple sequence generator, but actually generates a more complex infinite data structure, like a tree? If all the branches of the tree are independently recursively generated (think of a move tree for a game like chess) it could make sense to cut different branches at different depths, based on all sorts of conditions on the information generated thus far.
If the original author didn't prepare (either specifically for my use case or for a sufficiently general class of use cases), I'm out of luck. The author of the lazy infiniteLoop can just write it the natural way, and let each individual caller lazily explore what they want; neither has to know much about the other at all.
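In Haskell the natural lazy version looks something like this sketch (the move-generation function is a parameter, so nothing here is specific to any game):

data Tree a = Node a [Tree a]

-- The producer builds the whole (conceptually infinite) move tree,
-- knowing nothing about how consumers will explore or cut it.
gameTree :: (pos -> [pos]) -> pos -> Tree pos
gameTree moves p = Node p (map (gameTree moves) (moves p))

-- One consumer prunes to a uniform depth; another could just as easily
-- cut different branches at different depths.
prune :: Int -> Tree a -> Tree a
prune 0 (Node x _)        = Node x []
prune n (Node x children) = Node x (map (prune (n - 1)) children)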
Furthermore, what if the decision to stop lazily exploring the infinite output is actually interleaved with (and dependent on) the computation the caller is doing with that output? Think of the chess move tree again; how far I want to explore one branch of the tree could easily depend on my evaluation of the best option I've found in other branches. So either I do my traversal and calculation twice (once in the termination condition, to return a flag telling infiniteLoop to stop, and then again on the finite output so I can actually have my result), or the author of infiniteLoop had to prepare for not just a termination condition but a complicated function that also gets to return output (so that I can push my entire computation inside the "termination condition").
Taken to extremes, I could explore the output and calculate some results, display them to a user and get input, and then continue exploring the data structure based on the user's input (without re-calling infiniteLoop). The original author of the lazy infiniteLoop need have no idea that I would ever think of doing such a thing, and it will still work. If purity is enforced by the type system, this would be impossible with the passed-in termination condition approach unless the whole infiniteLoop were allowed to have side effects whenever the termination condition needs them (say, by giving the whole thing a monadic interface).
In short, providing the same flexibility you'd get with lazy evaluation from a strict infiniteLoop that takes higher-order functions to control it can mean a large amount of extra complexity for both the author of infiniteLoop and its caller (unless a variety of simpler wrappers are exposed, and one of them matches the caller's use case). Lazy evaluation can allow producers and consumers to be almost completely decoupled, while still giving the consumer the ability to control how much output the producer generates. Everything you can do that way you can indeed do with extra function arguments, as you say, but it requires the producer and consumer to essentially agree on a protocol for how the control functions work; and that protocol is almost always either specialised to the use case at hand (tying the consumer and producer together) or so complicated, in order to be fully general, that the producer and consumer are tied to that protocol instead, which is unlikely to be recreated elsewhere, so they're still tied together.

Time Complexity for index and drop of first item in Data.Sequence

I was recently working on an implementation that calculates a moving average over a stream of input, using Data.Sequence. I figured I could get the whole operation to be O(n) by using a deque.
My first attempt was (in my opinion) a bit more straightforward to read, but not a true deque. It looked like:
let newsequence = (|>) sequence n
...
let dropFrontTotal = fromIntegral (newtotal - index newsequence 0)
let newsequence' = drop 1 newsequence
...
According to the hackage docs for Data.Sequence, index should take O(log(min(i,n-i))) while drop should also take O(log(min(i,n-i))).
Here's my question:
If I do drop 1 someSequence, doesn't this mean a time complexity of O(log(min(1, (length someSequence)))), which in this case means: O(log(1))?
If so, isn't O(log(1)) effectively constant?
I had the same question for index someSequence 0: shouldn't that operation end up being O(log(0))?
Ultimately, I had enough doubts about my understanding that I resorted to using Criterion to benchmark the two implementations to prove that the index/drop version is slower (and the amount it's slower by grows with the input). The informal results on my machine can be seen at the linked gist.
I still don't really understand how to calculate time complexity for these operations, though, and I would appreciate any clarification anyone can provide.
What you suggest looks correct to me.
As a minor caveat, remember that these are amortized complexity bounds, so a single operation could require more than constant time, but a long chain of operations will only require time proportional to the length of the chain.
If you use criterion to benchmark and "reset" the state at every computation, you might see non-constant time costs, because the "reset" prevents the amortization. It really depends on how you perform the test. If you start from a sequence and perform a long chain of operations on it, it should be OK. If you repeat a single operation many times on the same operands, then it might not be.
Further, I guess bounds such as O(log(...)) should actually be read as O(log(1 + ...)) -- you can't realistically have O(log(1)) = O(0) or, worse, O(log(0)) = O(-inf) as a complexity bound.
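The question's faster implementation isn't shown, but a true deque-style version might look like this sketch, popping the front with viewl (amortized O(1)) instead of using index and drop:

import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (|>), ViewL(..))

-- Slide a fixed-size window one element to the right, maintaining a
-- running total so each step is amortized O(1).
slide :: Int -> (Seq Int, Int) -> Int -> (Seq Int, Int)
slide size (window, total) n
  | Seq.length window < size = (window |> n, total + n)
  | otherwise = case Seq.viewl window of
      oldest :< rest -> (rest |> n, total - oldest + n)
      EmptyL         -> (Seq.singleton n, n)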

Iterative octree traversal

I am not able to figure out the procedure for iterative octree traversal, though I have tried approaching it the way I would a binary tree traversal. For my problem, I have octree nodes with child and parent pointers, and I would like to iterate, storing only the leaf nodes in the stack.
Also, is going for iterative traversal faster than recursive traversal?
It is indeed like binary tree traversal, but you need to store a bit of intermediate information. A recursive algorithm will not be slower per se, but will use a bit more stack space for its O(log8 N) recursive calls (about 10 levels for 1 billion elements in the octree).
Iterative algorithms will also need the same amount of space to be efficient, but you can place it on the heap if you are afraid that your stack might overflow.
Recursively you would do (pseudocode):
function traverse_rec (octree):
    collect value   // if there are values in your intermediate nodes
    for child in children:
        traverse_rec (child)
The easiest way to arrive at an iterative algorithm is to use a stack or queue for depth-first or breadth-first traversal:
function traverse_iter_dfs(octree):
    stack = empty
    push_stack(root_node)
    while not empty(stack):
        node = pop(stack)
        collect value(node)
        for child in children(node):
            push_stack(child)
Replace the stack with a queue and you get breadth-first search. However, we are storing something in the region of O(7*(log8 N)) nodes which we are yet to traverse. If you think about it, that's the lesser evil though, unless you need to traverse really big trees. The only other way is to use the parent pointers: when you are done in a child, you need to somehow select the next sibling.
If you don't store in advance the index of the current node (with respect to its siblings), though, you can only search all the children of the parent in order to find the next sibling, which essentially doubles the amount of work to be done (for each node you don't just loop through the children but also through the siblings). Also, it looks like you at least need to remember which nodes you have already visited, for it is in general undecidable whether to descend farther down or return back up the tree otherwise (prove me wrong somebody).
All in all I would recommend against searching for such a solution.
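For concreteness, the stack-based traversal above can be written roughly like this in Haskell (the tree type is illustrative), collecting only the leaf values as the question asks:

data Octree a = Leaf a | Branch [Octree a]   -- up to 8 children per Branch

-- Depth-first traversal with an explicit stack (the list argument).
leaves :: Octree a -> [a]
leaves root = go [root]
  where
    go []                        = []
    go (Leaf x : stack)          = x : go stack
    go (Branch children : stack) = go (children ++ stack)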
Depends on what your goal is. Are you trying to find whether a node is visible, if a ray will intersect its bounding box, or if a point is contained in the node?
Let's assume that you are doing the last one, checking if a point is/should be contained in the node. I would add a method to the Octnode that takes a point and checks whether or not it lies within the bounding box of the Octnode. If it does, return true; else, false -- pretty simple. From here, call a drill-down method that starts at your head node and checks each child (a simple "for" loop) to see which Octnode the point lies in; it can be at most one.
Here is where your iterative vs recursive algorithm comes into play. If you want iterative, just store the pointer to the current node, and swap this pointer from the head node to the one containing your point. Then just keep drilling down till you reach maximal depth or don't find an Octnode containing it. If you want a recursive solution, then you will call this drill down method on the Octnode that you found the point in.
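A sketch of that drill-down loop (the Octnode representation here is made up for illustration):

type Point = (Double, Double, Double)

data Octnode = Octnode
  { bounds   :: (Point, Point)   -- (min corner, max corner)
  , children :: [Octnode]
  }

-- True if the point lies within the node's bounding box.
contains :: Octnode -> Point -> Bool
contains node (x, y, z) =
  let ((x0, y0, z0), (x1, y1, z1)) = bounds node
  in  x0 <= x && x <= x1 && y0 <= y && y <= y1 && z0 <= z && z <= z1

-- Keep swapping the "current node" for whichever child contains the
-- point, until no child does (i.e. we reached the maximal depth).
drillDown :: Point -> Octnode -> Octnode
drillDown p = go
  where
    go node = case filter (`contains` p) (children node) of
                (child:_) -> go child   -- at most one child can match
                []        -> node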
I wouldn't say that iterative versus recursive has much performance difference in terms of speed, but it could have a difference in terms of memory performance. Each time you recurse you add another call depth onto the stack. If you have a large Octree this could result in a large number of calls, possibly blowing your stack.

Do we care about the 'past' in FRP?

When toying around with implementing FRP, one thing I've found confusing is what to do with the past. Basically, my understanding was that I would be able to do this with a Behaviour at any point:
beh.at(x) // where time x < now
This seems like it could be problematic performance wise in a case such as this:
val beh = Stepper(0, event) // stepwise behaviour
Here we can see that to evaluate the Behaviour in the past we need to keep all the Events and we will end up performing (at worst) linear scans each time we sample.
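To make the cost concrete, here is a naive Haskell sketch of a Stepper whose at supports arbitrary past times (the names are illustrative, not from any particular FRP library):

type Time = Double

-- A step-wise behaviour that allows sampling at any past time must hold
-- on to every event and scan them linearly at each sample: a time leak.
-- (Assumes the event list is in ascending time order.)
stepperAt :: a -> [(Time, a)] -> Time -> a
stepperAt initial events t =
  case [v | (te, v) <- events, te <= t] of
    [] -> initial
    vs -> last vs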
Do we want this ability to be available or should Behaviours only be allowed to be evaluated at a time >= now? Do we even want to expose the at function to the programmer?
While a behaviour is considered to be a function of time, reliance on an arbitrary amount of past data in FRP is a Bad Thing, and is referred to as a time leak. That is, transformations on behaviours should generally be streaming/reactive in that they do not rely on more than a bounded amount of the past (and should accumulate this knowledge of the history explicitly).
So, no, at is not desirable in a real FRP system: it should not be possible to look at either the past or the future. (The latter is, of course, impossible, if the state of the future depends on anything external to the FRP system.)
Of course, this leads to the problem that only being able to look at the exact present severely restricts what you can do when writing a function to transform behaviours: Behaviour a -> Behaviour b becomes the same as a -> b, which makes many things we'd like to do impossible. But this is more an issue of finding a semantics, one of FRP's persistent problems, than anything else; as long as the primitive transformations on behaviours you provide are powerful enough without causing time leaks, everything should be fine. For more information on this, see Garbage collecting the semantics of FRP.

Haskell random number generation

What's the best way to handle random number generation in Haskell (or what are the tradeoffs)?
I haven't really seen an authoritative answer.
Consider: minimizing the impact on otherwise pure functions, how / when to seed, performance, thread safety
IMHO, the best idea is to keep the generator in a strict state record. Then you can use ordinary do notation to work with the generator. Seeding is done only once, at the beginning of the main program (or at the beginning of each thread). You can avoid IO by using the split operation, which yields two random generators from one (different ones, of course).
As State is still pure, thread safety can be guaranteed. Additionally, you can always escape the state monad by passing a random generator to the function explicitly. This is useful, for instance, for automated unit tests.
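A minimal sketch of that setup, using StdGen from System.Random inside the strict State monad (the function names are made up):

import Control.Monad.State.Strict
import System.Random (StdGen, newStdGen, randomR, split)

type Rand a = State StdGen a

-- Draw a die roll, threading the generator through the state.
roll :: Rand Int
roll = state (randomR (1, 6))

twoRolls :: Rand (Int, Int)
twoRolls = do
  a <- roll
  b <- roll
  return (a, b)

main :: IO ()
main = do
  g <- newStdGen             -- seed once, at the start of the program
  let (g1, g2) = split g     -- two generators from one, e.g. per thread
  print (evalState twoRolls g1)
  print (evalState twoRolls g2)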
