Is there an easy way to draw in Elm using the L-system?

I have found little information about drawing with L-systems in Elm. There is an L-system library, but it is not really up to date. All I have found that could be useful is a turtle graphics library (https://github.com/mrdimosthenis/turtle-graphics), but I am unsure how to draw with it using an L-system. What I have so far is just a way to generate the nth generation of a sentence from an axiom and a rule:
rule1 : LSystem.Rule State
rule1 state =
    case state of
        A ->
            [ A, B ]

        B ->
            [ A ]

generateNTimes : Int -> (Rule State) -> List State -> List State
generateNTimes n rule axiom =
    let
        aux state acc =
            if n <= 0 then
                state
            else
                generateNTimes (n - 1) rule ((apply rule state) ++ acc)
    in
    aux axiom []
I tried drawing using the turtle graphics library I mentioned above, but I am unable to see how I could use it together with the L-system.
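For what it's worth, here is a minimal sketch of how the two pieces usually fit together: first expand the axiom with the rules, then interpret each symbol of the expanded sentence as a turtle command and collect line segments to render (as SVG, for instance). It is sketched in Haskell, like the rest of this page, and every name in it (Symbol, rule, expand, step, draw) is made up for the example; the same structure should carry over to Elm and the turtle-graphics package.
-- Symbols of the L-system: draw forward, turn left, turn right.
data Symbol = F | Plus | Minus

-- An example rewrite rule; Plus and Minus are left unchanged.
rule :: Symbol -> [Symbol]
rule F = [F, Plus, F, Minus, F]
rule s = [s]

-- Expand the axiom n times by rewriting every symbol.
expand :: Int -> [Symbol] -> [Symbol]
expand 0 axiom = axiom
expand n axiom = expand (n - 1) (concatMap rule axiom)

-- The turtle state: position (x, y) and heading angle in radians.
type Turtle = (Double, Double, Double)
type Segment = ((Double, Double), (Double, Double))

-- Interpret one symbol: move or turn, emitting a line segment when drawing.
step :: Turtle -> Symbol -> (Turtle, [Segment])
step (x, y, a) F     = let x' = x + cos a
                           y' = y + sin a
                       in ((x', y', a), [((x, y), (x', y'))])
step (x, y, a) Plus  = ((x, y, a + pi / 3), [])
step (x, y, a) Minus = ((x, y, a - pi / 3), [])

-- Fold the expanded sentence into a list of segments for a renderer to draw.
draw :: [Symbol] -> [Segment]
draw = snd . foldl go ((0, 0, 0), [])
  where
    go (turtle, segs) sym =
        let (turtle', new) = step turtle sym
        in (turtle', segs ++ new)
Calling draw (expand 3 [F]) yields the segments of the third generation; in Elm, the same fold could emit the library's turtle commands instead of raw segments.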

Palindrome and Danvy's remark on direct style

Here is some code deciding whether a list is a palindrome in n+1 comparisons, in "direct style":
pal_d1 :: Eq a => [a] -> Bool
pal_d1 l = let (r,_) = walk l l in r
  where walk l [] = (True, l)
        walk l (_:[]) = (True, tail l)
        walk (x:l) (_:_:xs) = let (r, y:ys) = walk l xs
                              in (r && x == y, ys)
which can be tested on a few examples:
-- >>> pal_d1 [1,2,1]
-- True
-- >>> pal_d1 [1,2,2,1]
-- True
-- >>> pal_d1 [1,2,3,4,2,1]
-- False
Danvy claims in "There and back again" (right before section 4.2) that there is no direct-style solution without a control operator, due to the non-linear use of the continuation in the CPS-style solution below:
import Debug.Trace (trace)

pal_cps1 :: Eq a => [a] -> Bool
pal_cps1 l = walk l l (\_ -> trace "called" True)
  where
    walk l [] k = k l
    walk l (_:[]) k = k (tail l)
    walk (x:xs) (_:_:ys) k = walk xs ys (\(r:rs) -> x == r && k rs)
How does the first piece of code not contradict this assertion?
(and how is the continuation not used linearly?)
He does not claim that there is no solution without a control operator.
The continuation is not used linearly and therefore mapping this program back to direct style requires a control operator.
The context of the paper is to study systematic transformations between direct style and CPS, and the claim of that paragraph is that going back from CPS is tricky if the continuation is used in fancy ways.
With some effort you can wrangle it back into a nice shape, but the question remains: how might a compiler do that automatically?
(and how is the continuation not used linearly?)
In the paper, the continuation is on the right of andalso (&&), so it is discarded whenever the left operand is False.
In operational semantics, you can view the continuation as an evaluation context, and in that view discarding the continuation corresponds to throwing an exception. One can certainly do it, but the point is that this requires extra machinery in the source language.
The CPS code (in the question's original version --- since edited by OP) seems faulty. Looks like it should be
walk (x:xs) (_:_:ys) k = walk xs ys (\(z:zs) -> x == z && k zs)
The non-CPS code starts the comparisons from the middle and does n `div` 2 comparisons for a list of length n. It continues testing even if a mismatch is discovered, so it is "linear".
The CPS code exits right away in such a case, because (False && undefined) == False holds; so it is "non-linear". The two are not equivalent, so the first says nothing about the second.
As the other answer is saying, not calling the continuation amounts to throwing an exception in a code without continuations, what the paper's author apparently calls "the direct [i.e., non-CPS(?) --wn] style".
(I haven't read the paper).
It isn't difficult at all to code the early-exiting solution in the "direct" style, by the way. We would just use the same tortoise-and-hare trick to discover the halves while also building the first half in reverse, and then call and $ zipWith (==) first_half_reversed second_half in Haskell, or its equivalent short-circuiting direct recursive variant in a strict language like e.g. Scheme. A sketch follows below.
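For concreteness, here is one way that early-exiting direct-style version could look (my own sketch, with made-up names, not code from the paper):
palDirect :: Eq a => [a] -> Bool
palDirect xs = and (zipWith (==) firstHalfReversed secondHalf)
  where
    (firstHalfReversed, secondHalf) = walk [] xs xs

    -- The "hare" (third argument) advances two steps for every step of the
    -- "tortoise" (second argument); when it runs out we are at the middle.
    walk acc rest []         = (acc, rest)        -- even length
    walk acc rest [_]        = (acc, tail rest)   -- odd length: drop the middle element
    walk acc (y:ys) (_:_:zs) = walk (y:acc) ys zs
-- >>> palDirect [1,2,1]
-- True
-- >>> palDirect [1,2,3,4,2,1]
-- False
Because the standard and function short-circuits, the comparisons stop at the first mismatch, just like the CPS version.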

Backtrack sudoku in haskell

I'm trying to build a backtracking sudoku solver in Haskell, but I'm stuck at one of the final points. I created a function called nextBoards which returns all possible sudoku boards where the next empty place is filled in with the possible values. Now I'm trying to use backtracking in Haskell, but I cannot use things like while loops, and I'm really confused about how to do this. I've made a sudoku solver before in Java, but I'm completely stuck on how to do it in Haskell at the moment.
-- Generate the next boards for a given board
nextBoards :: Board -> [Board]
nextBoards b =
    let position = findFirstEmpty b
    in [update z (snd position) (fst position) b | z <- options b (snd position) (fst position)]
-- We found the real solution
solve :: Board -> [Board] -> Board
solve focus options
    | not (notEmpty focus) = focus

-- We hit a dead path, try the next option
solve focus options
    | solve (head options) (tail options)

-- We are solving the focus, generate the next boards
-- and save the rest in the options
solve focus options
    | solve (head (nextBoards focus)) (tail (nextBoards focus))
I really don't know how to proceed.
You could implement a backtracking solution over a (partial) solution space S (for instance, all possible Boards) with a type signature like:
backtrack :: S -> [S]
Here S is the current state and [S] is the list of all valid solved boards reachable from it. In general there are three possibilities:
We have found a state that is solved; we return a singleton list containing our solution:
backtrack s | solved s = [s]
We have found a dead end; in that case we put no further effort into it and return an empty list:
| invalid s = []
Otherwise we have to develop the solution further: we generate children, states that are one step beyond s, call backtrack recursively on each of them, and return the concatenation of all these backtracks:
| otherwise = concatMap backtrack $ children s
Or putting it all together:
backtrack :: S -> [S]
backtrack s | solved s  = [s]
            | invalid s = []
            | otherwise = concatMap backtrack $ children s
The invalid guard can be omitted by simply generating an empty list of children for invalid states (as your solution probably does already) or by never generating invalid Boards in the first place.
Now grounding this for your problem, generating children would come down to your nextBoards function:
children = nextBoards
or prettifying your function a bit:
children :: Board -> [Board]
children s = [update s y x v | v <- options s y x]
    where (x, y) = findFirstEmpty s
Now the solve (or backtrack) method is defined as:
backtrack :: Board -> [Board]
backtrack s | not (notEmpty s) = [s]
            | otherwise        = concatMap backtrack $ children s
backtrack will generate a list of all valid solved Sudoku boards (where the originally filled-in cells remain filled in). The list is generated lazily, so if you only want one valid solution, you can simply use head:
solve :: Board -> Board
solve = head . backtrack
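To see the same skeleton working end to end, here is the pattern instantiated on a small self-contained problem (n-queens rather than Sudoku, with names invented for the example); the invalid guard is folded into children, as suggested above, and head lazily picks out a single solution.
type Queens = [Int]  -- columns of the queens placed so far, most recent first

-- A new queen in column q is safe with respect to the queens already placed.
safe :: Int -> Queens -> Bool
safe q qs = and [q /= c && abs (q - c) /= d | (d, c) <- zip [1 ..] qs]

-- Only generate valid children, so no separate invalid guard is needed.
children :: Int -> Queens -> [Queens]
children n qs = [c : qs | c <- [1 .. n], safe c qs]

backtrack :: Int -> Queens -> [Queens]
backtrack n qs
    | length qs == n = [qs]
    | otherwise      = concatMap (backtrack n) (children n qs)

-- > head (backtrack 6 [])   -- lazily produces just one 6-queens solution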

Haskell - Calculating the shortest path using trees

I am trying to write code in Haskell that goes from point A to point F on a game board, which is essentially a matrix, following the shortest path.
This is the board:
AAAA
ACCB
ADEF
*
0 0 N
The robot enters on the letter A at the bottom (where the * is) and must reach F. At the bottom of the board are the coordinates, x=0, y=0, pointing towards North. The F coordinate is (3,0).
The trick is that it can't jump more than one letter: it can go from A to B, B to C, etc., and it can walk along letters of the same type (A to A, B to B, etc.).
It can only move forward and make turns (left, right), so the path that gets me to F would be
Forward, Forward, Right, Forward, Forward, Forward, Right, Jump, Right, Jump, Forward, Left, Jump, Left, Forward, Forward
Once it reaches F, it's done.
I want to try this approach, using a Tree
[hand-drawn sketch of a search tree rooted at A, branching at each move into the letters reachable from the current square, with one branch eventually reaching C and F]
After that I would only need to validate the correct and shortest path, right?
Problem is, I don't have that much experience using trees.
Would you suggest any other way to get the best path?
Thank you very much.
We're going to solve this problem by searching a tree, in three parts. First we will build a Tree representing the paths through the problem, with a branch for each possible move from each state. We'd like to find the shortest path to a state meeting a certain criterion, so we will write a breadth first search for searching any Tree. This won't be fast enough for the example problem you provided, so we will improve on the breadth first search with a transposition table, which keeps track of states we have already explored to avoid exploring them again.
Building a Tree
We'll assume that your playing board is represented in an Array from Data.Array
import Data.Array
type Board = Array (Int, Int) Char
board :: Board
board = listArray ((1,1),(3,4)) ("AAAA" ++ "ACCB" ++ "ADEF")
Data.Array doesn't provide an easy default way to make sure the indexes we look up with ! are actually within the bounds of the Array. For convenience, we'll provide a safe version that returns Just v if the value is in the Array, or Nothing otherwise.
import Data.Maybe
(!?) :: Ix i => Array i a -> i -> Maybe a
a !? i = if inRange (bounds a) i then Just (a ! i) else Nothing
The State of the puzzle can be represented by the combination of a position of the robot and the direction that the robot is facing.
data State = State {position :: (Int, Int), direction :: (Int, Int)}
    deriving (Eq, Ord, Show)
The direction is a unit vector that can be added to the position to get a new position. We can rotate the direction vector left or right and moveTowards it.
right :: Num a => (a, a) -> (a, a)
right (down, across) = (across, -down)
left :: Num a => (a, a) -> (a, a)
left (down, across) = (-across, down)
moveTowards :: (Num a, Num b) => (a, b) -> (a, b) -> (a, b)
moveTowards (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
To explore a board, we will need to be able to determine from a state what moves are legal. To do this it'd be useful to name the moves, so we'll make a data type to represent the possible moves.
import Prelude hiding (Right, Left)
data Move = Left | Right | Forward | Jump
    deriving (Show)
To determine what moves are legal on a board we need to know which Board we are using and the State of the robot. This suggests the type moves :: Board -> State -> [Move], but we're going to be computing the new state after each move just to decide whether the move was legal, so we will also return the new state for convenience.
moves :: Board -> State -> [(Move, State)]
moves board (State pos dir) =
    (if inRange (bounds board) pos then [(Right, State pos (right dir)), (Left, State pos (left dir))] else []) ++
    (if next == Just here        then [(Forward, State nextPos dir)] else []) ++
    (if next == Just (succ here) then [(Jump,    State nextPos dir)] else [])
  where
    here    = fromMaybe 'A' (board !? pos)
    nextPos = moveTowards dir pos
    next    = board !? nextPos
If we're on the board, we can turn Left and Right; the restriction that we be on the board guarantees that all the States returned by moves have positions that are on the board. If the value held at nextPos, the next position, matches Just here, we can go Forward to it (if we're off the board, we assume what is here is 'A'). If next is Just the successor of what is here, we can Jump to it. If next is off the board, it is Nothing and can't match either Just here or Just (succ here).
Up until this point, we've just provided the description of the problem and haven't touched on answering the question with trees. We are going to use the rose tree Tree defined in Data.Tree.
data Tree a = Node {
    rootLabel :: a,        -- ^ label value
    subForest :: Forest a  -- ^ zero or more child trees
}

type Forest a = [Tree a]
Each node of a Tree a holds a single value a and a list of branches which are each a Tree a.
We are going to build a list of Trees in a straightforward manner from our moves function. We are going to make each result of moves the rootLabel of a Node and make the branches be the list of Trees we get when we explore the new state.
import Data.Tree

explore :: Board -> State -> [Tree (Move, State)]
explore board = map go . moves board
  where
    go (label, state) = Node (label, state) (explore board state)
At this point, our trees are infinite; nothing keeps the robot from endlessly spinning in place. We can't draw one in full, but we could if we limited the tree to just a few steps.
limit :: Int -> Tree a -> Tree a
limit n (Node a ts)
    | n <= 0    = Node a []
    | otherwise = Node a (map (limit (n-1)) ts)
We'll display just the first couple of levels of the tree when we start just off the bottom left corner, facing towards the board, in State (4, 1) (-1, 0).
(putStrLn .
drawForest .
map (fmap (\(m, s) -> show (m, board ! position s)) . limit 2) .
explore board $ State (4, 1) (-1, 0))
(Forward,'A')
|
+- (Right,'A')
| |
| +- (Right,'A')
| |
| `- (Left,'A')
|
+- (Left,'A')
| |
| +- (Right,'A')
| |
| `- (Left,'A')
|
`- (Forward,'A')
|
+- (Right,'A')
|
+- (Left,'A')
|
`- (Forward,'A')
Breadth First Search
Breadth first search explores all the possibilities at one level (across the "breadth" of what is being searched) before descending into the next level (into the "depth" of what is being searched). Breadth first search finds the shortest path to a goal. For our trees, this means exploring everything at one layer before exploring any of what's in the inner layers. We'll accomplish this by keeping a queue of nodes to explore, adding the nodes we discover in the next layer to the end of the queue. The queue will always hold nodes from the current layer followed by nodes from the next layer. It will never hold any nodes from the layer past that, because we won't discover those nodes until we have moved on to the next layer.
To implement that, we need an efficient queue, so we'll use a sequence from Data.Sequence.
import Data.Sequence (viewl, ViewL (..), (><))
import qualified Data.Sequence as Seq
We start with an empty queue Seq.empty of nodes to explore and an empty path [] into the Trees. We add the initial possibilities to the end of the queue with >< (concatenation of sequences) and go. We look at the start of the queue. If there's nothing left, EmptyL, we didn't find a path to the goal and return Nothing. If there is something there and it matches the goal p, we return the path we have accumulated, reversed. If the first thing in the queue doesn't match the goal, we add it as the most recent part of the path and add all of its branches to the remainder of what was queued.
breadthFirstSearch :: (a -> Bool) -> [Tree a] -> Maybe [a]
breadthFirstSearch p = combine Seq.empty []
  where
    combine queue ancestors branches =
        go (queue >< (Seq.fromList . map ((,) ancestors) $ branches))
    go queue =
        case viewl queue of
            EmptyL -> Nothing
            (ancestors, Node a bs) :< queued ->
                if p a
                then Just . reverse $ a:ancestors
                else combine queued (a:ancestors) bs
This lets us write our first solve for Boards. It's convenient here that all of the positions returned from moves are on the board.
solve :: Char -> Board -> State -> Maybe [Move]
solve goal board = fmap (map fst) . breadthFirstSearch ((== goal) . (board !) . position . snd) . explore board
If we run this for our board it never finishes! Well, eventually it will, but my back-of-a-napkin calculation suggests it will take about 40 million steps: the path to the end of the maze is 16 steps long, the robot is frequently presented with 3 options for what to do at each step, and 3^16 is roughly 43 million.
> solve 'F' board (State (4, 1) (-1, 0))
We can solve much smaller puzzles like
AB
AC
*
We can represent the board for this puzzle with
smallBoard :: Board
smallBoard = listArray ((1,1),(2,2)) ("AB" ++ "AC")
We solve it looking for 'C', starting in row 3, column 1, facing towards lower-numbered rows.
> solve 'C' smallBoard (State (3, 1) (-1, 0))
Just [Forward,Forward,Right,Jump,Right,Jump]
Transposition Table
Certainly this problem must be easier to solve than exploring 40 million possible paths. Most of those paths consist of spinning in place or randomly meandering back and forth. The degenerate paths all share one property: they keep visiting states they have already visited. In the breadthFirstSearch code, those paths keep adding the same nodes to the queue. We can get rid of all of this extra work just by remembering the nodes that we've already seen.
We'll remember the set of nodes we've already seen with a Set from Data.Set.
import qualified Data.Set as Set
To the signature of breadthFirstSearch we'll add a function from the label for a node to a representation for the branches of that node. The representation should be equal whenever all the branches out of the node are the same. In order to quickly compare the representations in O(log n) time with a Set we require that the representation have an Ord instance instead of just equality. The Ord instance allows Set to check for membership with binary search.
breadthFirstSearchUnseen :: Ord r => (a -> r) -> (a -> Bool) -> [Tree a] -> Maybe [a]
In addition to keeping track of the queue, breadthFirstSearchUnseen keeps track of the set of representations that have been seen, starting with Set.empty. Each time we add branches to the queue with combine we also add the representations to seen. We only add the unseen branches whose representations are not in the set of branches we've already seen.
breadthFirstSearchUnseen repr p = combine Set.empty Seq.empty []
  where
    combine seen queued ancestors unseen =
        go
            (seen `Set.union` (Set.fromList . map (repr . rootLabel) $ unseen))
            (queued >< (Seq.fromList . map ((,) ancestors) $ unseen))
    go seen queue =
        case viewl queue of
            EmptyL -> Nothing
            (ancestors, Node a bs) :< queued ->
                if p a
                then Just . reverse $ ancestors'
                else combine seen queued ancestors' unseen
              where
                ancestors' = a:ancestors
                unseen = filter (flip Set.notMember seen . repr . rootLabel) bs
Now we can improve our solve function to use breadthFirstSearchUnseen. All of the branches from a node are determined by the State - the Move label that got to that state is irrelevant - so we only use the snd part of the (Move, State) tuple as the representation for a node.
solve :: Char -> Board -> State -> Maybe [Move]
solve goal board = fmap (map fst) . breadthFirstSearchUnseen snd ((== goal) . (board !) . position . snd) . explore board
We can now solve the original puzzle very quickly.
> solve 'F' board (State (4, 1) (-1, 0))
Just [Forward,Forward,Forward,Right,Forward,Forward,Forward,Right,Jump,Right,Jump,Forward,Left,Jump,Left,Jump,Jump]

Return all reachable states in a DFA starting from the initial state

I'm trying to write a program in Haskell that returns a list of reachable states starting from the initial state, similar to a depth first search.
states_reachable :: Eq st => DFA st -> [st]
states_reachable (qs, sigma, delta, s, inF) =
    [delta q a | q <- qs, a <- sigma]
Note:
qs is the set of states
sigma is the alphabet
delta is a transition function that takes a state and a symbol and returns the next state
s is the initial state
inF is a function that returns true if the state is an accepting state, false otherwise
As the function is defined now:
[delta q a | q <- qs, a <- sigma]
returns all the states in the DFA (which is incorrect).
What I would like is to populate the list by starting at the initial state s and testing the delta function with each input.
For example:
// returns the states reachable from the initial state for all symbols in sigma
[delta s a | a <- sigma]
The next step would be to repeat the process for each new state added to the list. This might add duplicates, but I can remove them later.
Then I tried:
[delta q a | q <- [delta s a | a <- sigma], a <- sigma]
I thought this might work, but it doesn't because it's creating the inner list and then using it to populate the outer list and then stopping.
I need to recursively build this list by exploring all the new states returned by the delta function.
You're attempting to compute the transitive closure of a relation here, where the relation is "x can get to y in one step". As such, I'd suggest using a generic transitive-closure solution, rather than a DFA-specific one, and then produce that relation from your DFA. Here's one fairly basic way to do it:
module Closure (closure) where

import Data.List (nub)

closure :: Eq a => (a -> [a]) -> a -> [a]
closure nexts init = go [init]
  where go xs = let xs' = nub $ xs ++ (xs >>= nexts)
                in if xs == xs'
                   then xs
                   else go xs'
The algorithm here is to have a list of reachable states, and at each step expand it by walking from each to all its nearest neighbors, and then nub the list to get rid of duplicates. Once that expansion step adds no new nodes, you're done.
Now, how to map your DFA problem onto this? It's not too hard: you just need to produce a nexts function using sigma and delta. Here we need to assume that your DFA delta function is total, i.e. that every node has a transition specified for every letter in sigma. This is easy enough to arrange by creating one additional "failure" node that all nodes can transition to if they don't like their input, so I'll just assume that's been done.
neighbors :: (node -> letter -> node) -> [letter] -> node -> [node]
neighbors delta sigma n = map (delta n) sigma
With that in place, your original problem reduces to:
closure (neighbors delta sigma) s
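For illustration, here is how the pieces fit together on a tiny hand-made DFA, using the closure and neighbors definitions above (the states, alphabet and transition function below are invented for the example):
-- A toy total transition function over states 0, 1, 2 and the alphabet "ab";
-- state 2 absorbs everything, playing the role of the "failure" node.
deltaEx :: Int -> Char -> Int
deltaEx 0 'a' = 1
deltaEx 0 'b' = 0
deltaEx 1 'a' = 1
deltaEx 1 'b' = 2
deltaEx _ _   = 2

sigmaEx :: String
sigmaEx = "ab"

-- > closure (neighbors deltaEx sigmaEx) 0
-- [0,1,2]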

Reconstruct a graph from BFS output in Haskell

I want to reconstruct the incidence structure of a graph in Haskell, which is given by the output of a breadth first traversal of it. Explicitly, the output consists of a root vertex and a list of neighborhoods (a neighborhood is a list of vertices marked as new or old (= already visited)), where each neighborhood corresponds to the least vertex which has not been assigned to a neighborhood yet.
In any imperative language, I would solve the problem by using a queue:
Input: root vertex r, list of neighborhoods L
(1) Put r into the empty queue Q
(2) if Q is empty then STOP
(3) extract the first vertex v of Q
(4) extract the first neighborhood N of L
(5) append the unvisited vertices of N to Q
(6) remove the markings (new/old) of the nodes of N and assign v to N
(7) goto (2)
I tried to implement this naive algorithm in Haskell (using a list or Data.Sequence as the queue), but ghci always runs out of memory. This should not happen, because although the input consists of 300 MB of data, 16 GB of RAM should clearly suffice.
Therefore the naive implementation seems to cause a memory leak. How would you implement this algorithm in Haskell?
Edit:
Here are the (slightly simplified) data types I use:
data Output = Out !Vertex ![[BFSNode]]
data Vertex = Vertex Integer SomeMoreComplexData
data BFSNode = New Vertex | Old Integer
data Graph = Graph ![Vertex] ![(Integer,[Integer])]
The data type "Output" contains the already parsed BFS output consisting of the root vertex and the lists of neighborhoods. BFSNode corresponds to a node in the BFS tree which belongs to either a new vertex which is visited for the first time, or to an old vertex which already has been visited and which is therefore referred by its unique number. Note that the parsing process works fine and consumes very few memory.
My aim is to convert "Output" into the data type "Graph" which consists of the lists of vertices and of an incidence list.
Here is a simplified version of my implementation:
readTree :: [[BFSNode]] -> Seq Integer -> Graph
readTree [] _ = Graph [] []
readTree (nb:nbs) qs =
    let (i :< qs') = viewl qs
        newVs = fromList $! map nodeNr . filter isNew $ nb
        (Graph vs adj) = readTree nbs $ qs' >< newVs
    in Graph (map unNew (filter isNew nb) ++ vs) ((i, nub $ map nodeNr nb) : adj)
"nbs" is the list of neighborhoods, "qs" is the queue. The function "nodeNr" extracts the unique identification number from a vertex, "isNew" tests whether a vertex is new, and "unNew" unpacks a new vertex from the data type "BFSNode".
Edit2:
I think I have localized the problem now. Maybe it has nothing to do with my implementation of the conversion process. My failure was to use the built-in function "read" to read the data type "Output" from a file. I realize now that Haskell has problems with reading big files. Even if it were just about reading a list of integers, e.g.
main = do
    txt <- readFile "test"
    writeFile "test2" . show $ (read txt :: [Integer])
the program will run out of memory if the file "test" is big enough. I understand now that it is not a good idea to parse data in this way, since "read" will load all the data into memory before showing any output, but I still do not understand why it fills 16 GB of RAM although the file is not even 500 MB. Do you have any idea what is wrong with "read"? Does Haskell show the same behavior on your machines?
Edit3:
Now I have implemented a stream-based parsing function "readOutput" which takes a String and returns the data type "Output". This function is lazy, so I immediately get output when I call it. But when I compose it with my conversion function "readTree" (which is clearly tail-recursive) I get no output at all and the memory usage increases as usual. What am I doing wrong?
Edit4:
The problem in Edit3 came from some strictifications, which I have now removed.
This question does not specify a key ingredient - how is the graph going to be represented in Haskell? Functional programs require carefully thought out data structures to maximize sharing and run efficiently. Usually, this means they're recursively built from nothing (inductive). There's a paper on inductive graphs and functional graph algorithms that gives one representation:
module Test where
data Graph a = Empty | Extension (Graph a) [Int] (Int, a)
    deriving Show
That is, a graph is either Empty, or a (smaller) graph extended by one node. This is exactly how lists are built using Cons in functional languages, except that the additional node has to specify the smaller graph, the predecessor links ([Int]), and the new node number and data, (Int,a). Note that they also implemented this as an abstract type "for efficiency reasons."
A graph with one node can be generated by extending the empty graph.
singleton :: (Int,a) -> Graph a
singleton x = Extension Empty [] x
Using this structure, it's simple to define a recursive parse algorithm for your BFS tree.
data Mark a = Visited Int | New (Int,a) deriving Show
parse :: (Int,a) -> [[Mark a]] -> Graph a
parse x nbrs = extend Empty [x] nbrs
extend :: Graph a -> [(Int,a)] -> [[Mark a]] -> Graph a
extend g [] [] = g
extend g _ [] = Empty -- leftover nodes, really an error.
extend g [] _ = Empty -- leftover neighborhoods, really an error.
extend g (x : tl) (nbr : nbrs) =
    extend (Extension g (seen nbr) x) (news tl nbr) nbrs
news :: [(Int,a)] -> [Mark a] -> [(Int,a)]
news l (New x : tl) = news (uniq l x) tl
news l (_ : tl) = news l tl
news l [] = l
uniq :: [(Int,a)] -> (Int,a) -> [(Int,a)]
uniq (x:tl) y = x : if (fst x == fst y) then tl else uniq tl y
uniq [] y = [y]
seen :: [Mark a] -> [Int]
seen (Visited i : tl) = i : seen tl
seen (_ : tl) = seen tl
seen [] = []
m0 = [New (1,())]
m1 = [Visited 0, New (2,()), New (3,())]
m2 = [Visited 1, New (3,())]
m3 = [Visited 1, Visited 2]
nbrs = [m0,m1,m2,m3]
Testing it out,
$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :load Test
[1 of 1] Compiling Test ( Test.hs, interpreted )
Ok, modules loaded: Test.
*Test> parse (0,()) nbrs
Extension (Extension (Extension (Extension Empty [] (0,())) [0] (1,())) [1] (2,())) [1,2] (3,())
For efficiency, you could do the following:
The news and seen functions could be combined, let (ns,sn) = newseen nbr ([],[]), and made tail-recursive (passing their partially constructed lists and returning immediately) for efficiency; see the sketch after this list.
Your input could keep track of the node at the center of each neighbor list. This would avoid the list concatenation in the stack of neighbors. Alternatively, you could use a functional deque to hold that stack.
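A rough sketch of the first suggestion, reusing the Mark and uniq definitions above (the argument order and accumulator shape are my own choice, and the Visited numbers come out in reverse order, which I'm assuming is acceptable for the predecessor links):
-- One pass over a neighborhood, accumulating both the pending new nodes
-- and the numbers of the already-visited neighbors.
newseen :: [(Int, a)] -> [Mark a] -> ([(Int, a)], [Int])
newseen pending nbr = go pending [] nbr
  where
    go ns sn []               = (ns, sn)
    go ns sn (New x : tl)     = go (uniq ns x) sn tl
    go ns sn (Visited i : tl) = go ns (i : sn) tl

-- extend would then use it as:
--   extend g (x : tl) (nbr : nbrs) =
--       let (ns, sn) = newseen tl nbr
--       in  extend (Extension g sn x) ns nbrs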
If you haven't seen it, I'd recommend Okasaki's book on purely functional data structures.
