EDIT: while I'm still interested in an answer about the problems the execution runs into in this case, it appears that it was indeed related to strictness, since compiling with -O fixes the execution and the program can handle the tree really quickly.
I'm currently working on the 67th problem of Project Euler.
I already solved it using simple lists and dynamic programming.
I'd like to solve it now using a tree data structure (well, where a Node can have two parents, so it's not really a tree). I thought I'd use a simple tree but would take care to craft it so that Nodes are shared when appropriate:
data Tree a = Leaf a | Node a (Tree a) (Tree a) deriving (Show, Eq)
Solving the problem is then just a matter of going through the tree recursively:
calculate :: (Ord a, Num a) => Tree a -> a
calculate (Node v l r) = v + (max (calculate l) (calculate r))
calculate (Leaf v) = v
Obviously this has exponential time complexity though. So I tried to memoize the results with:
calculate :: (Ord a, Num a) => Tree a -> a
calculate = memo go
where go (Node v l r) = v + (max (calculate l) (calculate r))
go (Leaf v) = v
where memo comes from Stable Memo. Stable Memo is supposed to memoize based on whether or not it has seen the exact same arguments (as in, same in memory).
So I used ghc-vis to see if my tree was correctly sharing nodes to avoid recomputation of things already computed in another branch.
On the sample tree produced by my function, lists2tree [[1], [2, 3], [4, 5, 6]], it shows the following correct sharing:
(ghc-vis rendering of the shared tree omitted; source: crydee.eu)
Here we can see that the node 5 is shared.
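For reference, the value produced for that input works out (by hand, from the lists2tree code below) to:
Node 1 (Node 2 (Leaf 4) (Leaf 5))
       (Node 3 (Leaf 5) (Leaf 6))
where the two occurrences of Leaf 5 are the same heap object; the Show output alone cannot display that sharing, which is why ghc-vis is needed.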
Yet it seems that my tree in the actual Euler Problem isn't getting memoized correctly.
The code is available on github, but I guess that apart from the calculate method above, the only other important method is the one that creates the tree. Here it is:
lists2tree :: [[a]] -> Tree a
lists2tree = head . l2t
l2t :: [[a]] -> [Tree a]
l2t (xs:ys:zss) = l2n xs ts t
where (t:ts) = l2t (ys:zss)
l2t (x:[]) = l2l x
l2t [] = undefined
l2n :: [a] -> [Tree a] -> Tree a -> [Tree a]
l2n (x:xs) (y:ys) p = Node x p y:l2n xs ys y
l2n [] [] _ = []
l2n _ _ _ = undefined
l2l :: [a] -> [Tree a]
l2l = map (\l -> Leaf l)
It basically goes through the list of lists two rows at a time and then creates nodes from bottom to top recursively.
What is wrong with this approach? I thought it might be that the program still builds the complete tree as thunks before getting to the leaves, and hence before memoizing, which would negate all the benefits of memoization, but I'm not sure that's the case. If it is, is there a way to fix it?
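One thing that might be worth trying, in line with the strictness observation in the EDIT above (this is my own sketch, not something from the original post, and I have not verified that it alone restores the memoization), is to force the shared child node while the tree is being built, so that both parents end up pointing at an evaluated Node rather than at a thunk:
-- hypothetical stricter variant of the l2n above
l2nStrict :: [a] -> [Tree a] -> Tree a -> [Tree a]
l2nStrict (x:xs) (y:ys) p = y `seq` (Node x p y : l2nStrict xs ys y)
l2nStrict [] [] _ = []
l2nStrict _ _ _ = undefined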
This doesn't really address the original question, but I find it is usually easier and more powerful to use explicit memoization.
I chose to store the triangle as a list indexed by a position rather than a tree:
[ ((1,1), 3),
  ((2,1), 7), ((2,2), 4),
  ... ]
Suppose that part of the result has already been memoized in a list of this format. Then computing the answer at a particular coordinate is trivial:
a # i = let Just v = lookup i a in v
compute tree result (x,y) = tree # (x,y) + max (result # (x+1,y)) (result # (x+1,y+1))
Now we must build result. This is also trivial; all we have to do is map compute over all valid indices.
euler67 :: [((Int, Int), Integer)] -> Integer
euler67 tree = result # (1,1)
where
xMax = maximum $ map (fst . fst) tree
result = [ ((x,y), compute (x,y)) | x <- [1 .. xMax], y <- [1..x] ]
++ [ ((xMax + 1,y),0) | y <- [1..xMax + 1]]
compute (x,y) = tree # (x,y) + max (result # (x+1,y)) (result # (x+1,y+1))
Computing the height of the triangle (xMax) is just a matter of getting the maximum x-index. Of course we are assuming that the tree is well formed.
The only remotely complicated part is determining which indices are valid for result. Obviously we need 1 row for every row in the original tree. Row x will have x items. We also add an extra row of zeroes at the bottom - we could handle the base case in a special way in compute but it is probably easier this way.
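A small worked example (my own, not from the original answer) may help: for the two-row triangle with 3 on top and 7, 4 below, the list representation and the expected answer are
smallTriangle :: [((Int, Int), Integer)]
smallTriangle = [((1,1),3), ((2,1),7), ((2,2),4)]
-- euler67 smallTriangle == 10, i.e. 3 + max 7 4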
You'll notice that this is quite slow for the hundred-row triangle. This is because lookup is traversing three lists per call to compute. To speed it up I used arrays:
euler67' :: Array (Int, Int) Integer -> Integer
euler67' tree = result ! (1,1)
where
((xMin, yMin), (xMax, yMax)) = bounds tree
result = accumArray (+) 0 ((xMin, yMin), (xMax + 1, yMax + 1)) $
[ ((x,y), compute (x,y)) | x <- [xMin .. xMax], y <- [yMin..x] ]
++ [ ((xMax + 1,y),0) | y <- [yMin..xMax + 1]]
compute (x,y) = tree ! (x,y) + max (result ! (x+1,y)) (result ! (x+1,y+1))
Also here is the code I used for reading the files:
readTree' :: String -> IO (Array (Int, Int) Integer)
readTree' path = do
tree <- readTree path
let
xMax = maximum $ map (fst . fst) tree
yMax = maximum $ map (snd . fst) tree
return $ array ((1,1), (xMax,yMax)) tree
readTree :: String -> IO [((Int, Int), Integer)]
readTree path = do
s <- readFile path
return $ map f $ concat $ zipWith (\n xs -> zip (repeat n) xs) [1..] $ map (zip [1..] . map read . words) $ lines s
where
f (a, (b, c)) = ((a,b), c)
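For completeness, the pieces might be wired together like this (the main function and the file name are my own additions, and the array code above assumes an import of Data.Array):
main :: IO ()
main = do
  arr <- readTree' "triangle.txt"  -- the file name is an assumption
  print (euler67' arr)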
Did I implement inorder level-order tree traversal using tail-recursion correctly?
inorder (Leaf n) temp = n:temp
inorder (Node (n, left, right)) temp = inorder left (n:inorder right temp)
inorder :: Tree a -> [a] -> [a]
Tree is declared as
data Tree a = Leaf a | Node (a, Tree a, Tree a) deriving Show
and returns
[2,1,3] on call inorder three [] where three = Node (1, Leaf 2, Leaf 3)
This technically isn't tail recursive because you have a recursive call inorder right temp in a nontail position. One way to fix this would be with continuations. You write a function which takes an accumulator like before, but rather than the accumulator being just a list it's actually a function representing the work left to do in the computation. This means that instead of making a non-tail call and just returning, we can always tail call because the context we need is saved to the continuation.
inorder :: Tree a -> [a]
inorder = go id
  where go :: ([a] -> r) -> Tree a -> r
        go k (Leaf a) = k [a]
        go k (Node (a, l, r)) = go (\ls -> go (\rs -> k $ ls ++ a : rs) r) l
Here every call is a tail call as required, but it's quite inefficient because it requires a ++ operation at every level, pushing us into quadratic costs. A more efficient algorithm would avoid building up an explicit list and instead build up a difference list, delaying the construction of the concrete structure and giving a more efficient algorithm:
type Diff a = [a] -> [a] -- A difference list is just a function
nil :: Diff a
nil xs = xs
cons :: a -> Diff a -> Diff a
cons a d = (:) a . d
append :: Diff a -> Diff a -> Diff a
append xs ys = xs . ys
toList :: Diff a -> [a]
toList xs = xs []
Note that all of these operations are O(1) except for toList which is O(n) in the number of entries. The important point here is that diff lists are cheap and easy to append so we'll construct these in our algorithm and construct the concrete list at the very end
inorder :: Tree a -> [a]
inorder = go toList
  where go :: (Diff a -> r) -> Tree a -> r
        go k (Leaf a) = k (cons a nil)
        go k (Node (a, l, r)) =
          go (\ls -> go (\rs -> k $ ls `append` cons a rs) r) l
And now, through gratuitous application of functions we've gotten a completely unidiomatic Haskell program. You see in Haskell we don't really care about tail calls because we generally want to handle infinite structures correctly and that's not really possible if we demand everything be tail recursive. In fact, I would say that while not tail recursive, the code you originally had is the most idiomatic, that's even how it's implemented in Data.Set! It has the property that we can lazily consume the result of that toList and it will work with us and lazily process the tree. So in your implementation, something like
min :: Tree a -> Maybe a
min t = listToMaybe (inorder t [])  -- listToMaybe comes from Data.Maybe
is going to be pretty darn close to how you would implement it by hand, efficiency-wise! It will not have to construct or traverse the whole tree first the way my version will. These sorts of compositional effects of laziness pay more dividends in real Haskell code than syntactically making our code use only tail calls (which does nothing to actually guarantee space usage anyways).
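As a tiny illustration of that laziness (my own example, using the Tree type and the original two-argument inorder from the question): an infinite, right-leaning tree can still have a prefix of its in-order listing taken.
nats :: Tree Integer
nats = go 0 where go n = Node (n, Leaf n, go (n + 1))
-- take 4 (inorder nats []) == [0,0,1,1], forcing only a finite prefix of the tree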
In Haskell, one can do filters, sums, etc on infinite lists in constant space, because Haskell only produces list nodes when needed, and garbage collects ones it's finished with.
I'd like this to work with infinite trees.
Below is a rather silly program that generates an infinite binary tree with nodes representing the natural numbers.
I've then written a function that does a depth-first traversal of this tree, spitting out the nodes at a particular level.
Then I've done a quick sum on the nodes divisible by 5.
In theory, this algorithm could be implemented in O(n) space for an n depth tree of O(2^n) nodes. Just generate the tree on the fly, removing the nodes you've already completed processing.
Haskell does generate the tree on the fly, but doesn't garbage collect the nodes it seems.
Below is the code, I'd like to see code with a similar effect but that doesn't require O(2^n) space.
import Data.List (foldl')
data Tree = Tree Int Tree Tree
tree n = Tree n (tree (2 * n)) (tree (2 * n + 1))
treeOne = tree 1
depthNTree n x = go n x id [] where
go :: Int -> Tree -> ([Int] -> [Int]) -> [Int] -> [Int]
go 0 (Tree x _ _) acc rest = acc (x:rest)
go n (Tree _ left right) acc rest = t2 rest where
t1 = go (n - 1) left acc
t2 = go (n - 1) right t1
main = do
x <- getLine
print . foldl' (+) 0 . filter (\x -> x `rem` 5 == 0) $ depthNTree (read x) treeOne
Your depthNTree uses 2^n space because you keep the left subtree around through t1 while you're traversing the right subtree. The recursive call on the right subtree should contain no reference to the left, as a necessary condition for incrementally garbage collected traversals.
The naive version works acceptably in this example:
depthNTree n t = go n t where
go 0 (Tree x _ _) = [x]
go n (Tree _ l r) = go (n - 1) l ++ go (n - 1) r
Now main with input 24 uses 2 MB space, while the original version used 1820 MB. The optimal solution here is similar to the above, except it uses difference lists:
depthNTree n t = go n t [] where
go 0 (Tree x _ _) = (x:)
go n (Tree _ l r) = go (n - 1) l . go (n - 1) r
This isn't much faster than the plain list version in many cases, because with tree-depths around 20-30 the left nesting of ++ isn't very costly. The difference becomes more pronounced if we use large tree depths:
print $ sum $ take 10 $ depthNTree 1000000 treeOne
On my computer, this runs in 0.25 secs with difference lists and 1.6 secs with lists.
I am trying to construct a lazy data structure that holds an infinite bitmap. I would like to support the following operations:
true :: InfBitMap
Returns an infinite bitmap of True, i.e. all positions should have value True.
falsify :: InfBitMap -> [Int] -> InfBitMap
Set all positions in the list to False. The list is possibly infinite. For example, falsify true [0,2..] will return a bitmap where all (and only) odd positions are True.
check :: InfBitMap -> Int -> Bool
Check the value of the index.
Here is what I could do so far.
-- InfBitMap will look like [(#), (#, #), (#, #, #, #)..]
type InfBitMap = [Seq Bool]
true :: InfBitMap
true = iterate (\x -> x >< x) $ singleton True
-- O(L * log N) where N is the biggest index in the list checked for later
-- and L is the length of the index list. It is assumed that the list is
-- sorted and unique.
falsify :: InfBitMap -> [Int] -> InfBitMap
falsify ls is = map (falsify' is) ls
where
-- Update each sequence with all indices within its length
-- Basically composes a list of (update pos False) for all positions
-- within the length of the sequence and then applies it.
falsify' is l = foldl' (.) id
(map ((flip update) False)
(takeWhile (< length l) is))
$ l
-- O(log N) where N is the index.
check :: InfBitMap -> Int -> Bool
check ls i = index (fromJust $ find ((> i) . length) ls) i
I am wondering if there is some Haskellish concept/data-structure that I am missing that would make my code more elegant / more efficient (constants do not matter to me, just order). I tried looking at Zippers and Lenses but they do not seem to help. I would like to keep the complexities of updates and checks logarithmic (maybe just amortized logarithmic).
Note: before someone suspects it, no this is not a homework problem!
Update:
It just occurred to me that check can be improved to:
-- O(log N) where N is the index.
-- Returns "collapsed" bitmap for later more efficient checks.
check :: InfBitMap -> Int -> (Bool, InfBitMap)
check ls i = (index l i, ls')
where
ls'@(l:_) = dropWhile ((<= i) . length) ls
Which can be turned into a Monad for code cleanliness.
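For instance (a sketch of my own, assuming mtl's Control.Monad.State; checkM is not a name from the question), the collapsing check can be threaded through State:
import Control.Monad.State

-- reuses the pair-returning check above as the state transition
checkM :: Int -> State InfBitMap Bool
checkM i = state (\ls -> check ls i)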
A slight variation on the well-known integer trie seems to be applicable here.
{-# LANGUAGE DeriveFunctor #-}
data Trie a = Trie a (Trie a) (Trie a) deriving (Functor)
true :: Trie Bool
true = Trie True true true
-- O(log(index))
check :: Trie a -> Int -> a
check t i | i < 0 = error "negative index"
check t i = go t (i + 1) where
go (Trie a _ _) 1 = a
go (Trie _ l r) i = go (if even i then l else r) (div i 2)
--O(log(index))
modify :: Trie a -> Int -> (a -> a) -> Trie a
modify t i f | i < 0 = error "negative index"
modify t i f = go t (i + 1) where
go (Trie a l r) 1 = Trie (f a) l r
go (Trie a l r) i | even i = Trie a (go l (div i 2)) r
go (Trie a l r) i = Trie a l (go r (div i 2))
Unfortunately we can't use modify to implement falsify because we can't handle infinite lists of indices that way (all modifications have to be performed before an element of the trie can be inspected). Instead, we should do something more like a merge:
ascIndexModify :: Trie a -> [(Int, a -> a)] -> Trie a
ascIndexModify t is = go 1 t is where
go _ t [] = t
go i t#(Trie a l r) ((i', f):is) = case compare i (i' + 1) of
LT -> Trie a (go (2*i) l ((i', f):is)) (go (2*i+1) r ((i', f):is))
GT -> go i t is
EQ -> Trie (f a) (go (2*i) l is) (go (2*i+1) r is)
falsify :: Trie Bool -> [Int] -> Trie Bool
falsify t is = ascIndexModify t [(i, const False) | i <- is]
We assume strictly ascending indices in is, since otherwise we would skip places in the trie or even get non-termination, for example in check (falsify t (repeat 0)) 1.
The time complexities are a bit complicated by laziness. In check (falsify t is) index, we pay an additional cost of roughly log2 index comparisons, and a further length (filter (<index) is) comparisons (i.e. the cost of stepping over all the indices smaller than what we're looking up). You could say it's O(max(log(index), length(filter (<index) is))). Anyway, it's definitely better than the O(length is * log (index)) that we would get for a falsify implemented for finite is-es using modify.
We must keep in mind that tree nodes are evaluated once, and subsequent check-s for the same index after the first check are not paying any extra cost for falsify. Again, laziness makes this a bit complicated.
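As a quick sanity check (my own example, not from the answer): falsifying every even index and then checking the first few positions terminates, thanks to the merge-like traversal, and gives the expected alternation.
evensOff :: Trie Bool
evensOff = falsify true [0, 2 ..]
-- map (check evensOff) [0..5] == [False,True,False,True,False,True]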
This falsify is also pretty well-behaved when we want to traverse a prefix of a trie. Take this trieToList function:
trieToList :: Trie a -> [a]
trieToList t = go [t] where
go ts = [a | Trie a _ _ <- ts]
++ go (do {Trie _ l r <- ts; [l, r]})
It's a standard breadth-first traversal, in linear time. The traversal time remains linear when we compute take n $ trieToList (falsify t is), since falsify incurs at most n + length (filter (<n) is) extra comparisons, which is at most 2 * n, assuming strictly increasing is.
(side note: the space requirement of breadth-first traversal is rather painful, but I can't see a simple way to help it, since iterative deepening is even worse here, because there the whole tree must be held in memory, while bfs only has to remember the bottom level of the tree).
One way to represent this is as a function.
true = const True
falsify ls is = \i -> not (i `elem` is) && ls i
check ls i = ls i
The true and falsify functions are nice and efficient. The check function can be as bad as linear. It's possible to improve the efficiency of the same basic idea. I like its elegance.
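Hypothetical usage (my example, with a finite index list so that elem is guaranteed to return):
-- check (falsify true [2,4,6]) 3 == True     (3 was never falsified)
-- check (falsify true [2,4,6]) 4 == False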
I am trying to construct a tree from pre/postorder traversals. My tree type is below:
data Tree = Emptytree | Node Integer [Tree]
I am new to functional programming, so I have come across some difficulties while trying to construct my base cases and recursion. My function will be something like this:
ListToTree :: [Integer] -> [Integer] -> Tree
I constructed an algorithm, but I cannot make it fit the language requirements.
My algorithm is quite simple: I take each element of the first list (the preorder traversal) and then check its position in the second list. Let me give an example:
    1
   / \
  2   5
 / \
3   4
The preorder traversal of this tree is, as you know, [1,2,3,4,5].
The postorder traversal of this tree is, as you know, [3,4,2,5,1].
First I look at the first element of the first list; it is 1. Then I look at its position in the second list: it is last, so I add it to my tree. Then I check the next element; it is 2, and in the second list it is to the left of 1, which means it is a child of 1. Then 3 is to the left of 2 (in the second list), so it is also a child of 2; then I look at 4, which is to the left of 2, so it is a child of 2; and lastly 5 is to the left of 1, so it is a child of 1 (and because it is to the right of 2, it is not a child of 2).
I tried to implement it . I write helper function which determines if Node has a child or not. I use also counter in my function So actually my function is like this:
ListToTree :: Integer -> [Integer] -> [Integer] -> Tree
{-First Stands for counter ,2nd preorder, 3rd postorder-}
My base conditions are:
1. if the lists are empty, return Emptytree;
2. if counter == length - 1, return Node element [Emptytree].
My main problematic part is in my recursive part:
ListToTree counter a b
| hasChild b counter == 1 = Node ( Element ) [ListToTree (1+counter) a b]
| hasChild b counter == 0 = Node ( Element ) [Emptytree]
{-My problematic part if node has no Child what I must change here-}
{-Or what are your suggestions-}
I need help improving my algorithm. Any kind of help or comments will be highly appreciated.
The beautiful thing about Haskell is that you usually don't need a counter. It is usually sufficient to just do pattern matching.
I will give the solution for [Tree] since that requires fewer cases. If you want a solution for a single Tree you can just introduce a few cases in a wrapper function (a sketch of such a wrapper follows the code below).
listToTree :: [Integer] -> [Integer] -> [Tree]
listToTree [] [] = []
listToTree (x:xs) ys = go where
fstSubTreePost = takeWhile (/=x) ys -- all the elems of 1. subtree except x
fstSubTreeLength = length fstSubTreePost
fstSubTreePre = take fstSubTreeLength xs
-- this will recursively compute the first subtree
fstTree = Node x (listToTree fstSubTreePre fstSubTreePost)
-- the next line will recursively compute the rest of the subtrees
rest = listToTree (drop fstSubTreeLength xs) (drop (fstSubTreeLength+1) ys)
go = fstTree : rest
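A possible wrapper for the single-Tree case (my own sketch against the question's Tree type; the error message is mine):
listToSingleTree :: [Integer] -> [Integer] -> Tree
listToSingleTree [] [] = Emptytree
listToSingleTree pre post = case listToTree pre post of
  [t] -> t
  _   -> error "traversals do not describe a single tree"
listToSingleTree [1,2,3,4,5] [3,4,2,5,1] then rebuilds the example tree from the question.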
Given that the pre-order and post-order are [Integer], there may be zero, one, or many trees that produce these traversals. For instance, the traversals [1,1,1] and [1,1,1] have two possible trees. With the 'mLast' and 'splits' helper functions, it is possible to define a short 'listToTrees' which handles the possible 'Forest' parsings. Then it is easy to define 'listToTree' as a special case that produces the possible single 'Tree' parsings.
module PPT where
import Data.List
data Tree a = Emptytree | Node a (Forest a)
deriving Show
-- | A list of sibling trees, in left to right order
type Forest a = [Tree a]
-- | Returns a list of all valid trees that produce the given pre-order and post-order traversals.
--
-- If the input cannot be parsed into a Tree then results is an empty list.
listToTree :: [Integer] -> [Integer] -> [Tree Integer]
listToTree [] [] = [Emptytree] -- base case
listToTree [] _ = [] -- detect length mismatch
listToTree (x:xs) yAll = case mLast yAll of
Just (ys, y) | x==y -> map (Node x) (listToTrees xs ys) -- pre-order start == post-order end
_ -> []
-- | Given pre-order and post-order traversals of a forest, return a list of possible parsings.
listToTrees :: [Integer] -> [Integer] -> [Forest Integer]
listToTrees [] [] = [ [] ] -- base case
listToTrees [] _ = [] -- detect length mismatch
listToTrees (x:xs) ys = concatMap build (splits x ys) -- for each copy of 'x' in ysAll
where
build (belowX', _x', rightOfX') =
let (belowX, rightOfX) = splitAt (length belowX') xs
in [ Node x kids : sibs
| kids <- listToTrees belowX belowX'
, sibs <- listToTrees rightOfX rightOfX' ]
-- | Safely split a non-empty list into the initial portion and the last element
mLast :: [a] -> Maybe ([a], a)
mLast [] = Nothing
mLast ys = Just (init ys, last ys)
-- | At each position for the given element 'x', split the input list 'ys' into (pre, x, post)
-- portions. The output has a tuple for each copy of 'x' in the input list 'ys'.
--
-- This could be better optimized to avoid (++), or changed to a zipper
splits :: Eq a => a -> [a] -> [ ([a], a, [a]) ]
splits x ysIn = unfoldr go ([], ysIn)
where
go (pres, ys) =
case span (x /=) ys of
(_, []) -> Nothing
(pre, x':post) -> Just ((pres ++ pre, x', post), (pres++pre++[x'], post))
-- | test1 has a single possible parsing
test1 :: ([Integer], [Integer])
test1 = ( [1, 2, 3, 4, 5]
, [3, 4, 2, 5, 1] )
-- | test2 has two possible parsings
test2 :: ([Integer], [Integer])
test2 = ( [1, 2, 1, 2]
, [2, 1, 2, 1] )
main :: IO ()
main = do
mapM_ print (uncurry listToTree test1)
mapM_ print (uncurry listToTree test2)
I thought I was smooth sailing in my Haskell studies, until...
I have a [[Int]]
tiles = [[1,0,0]
,[0,1,0]
,[0,1,0]
]
and a data type:
data Coord = Coord
{ x :: Int
, y :: Int
} deriving (Eq)
Based on the input tiles, I've been trying to output a [Coord], such that a Coord is only generated when the value of tiles is 1, and the Coord will store its position in the 2d list:
blackBox :: [[Int]] -> [Coord]
blackBox tiles = <magic>
-- given the above example I would expect:
-- [(Coord 0 0),(Coord 1 1),(Coord 1 2)]
I have tried things like first converting [[Int]] to a [Int], via:
foldTiles :: [[Int]] -> [Int]
foldTiles tiles = foldr (++) [] tiles
but after that I'm not really sure how to pass the indices along. I suppose if I could map over the "folded tiles", outputting a tuple (value, index), I could easily figure out the rest.
Update: In case anyone's interested, I got it working and here is a demo of it (with source code and a link to GitHub)! I will have to take more time to understand each of the answers as this is my first time programming a game using FP. Thanks a lot!
http://kennycason.com/posts/2013-10-10-haskell-sdl-gameboy-boxxle.html
This is a place where list comprehensions shine.
blackBox tiles =
[Coord x y -- generate a Coord pair
| (y, row) <- enumerate tiles -- for each row with its coordinate
, (x, tile) <- enumerate row -- for each tile in the row (with coordinate)
, tile == 1] -- if the tile is 1
Or you could go for the equivalent do notation (since list is a monad), which requires importing Control.Monad (for guard).
blackBox tiles = do
(y, row) <- enumerate tiles -- for each row with its coordinate
(x, tile) <- enumerate row -- for each tile in the row (with coordinate)
guard (tile == 1) -- as long as the tile is 1
return (Coord x y) -- return a coord pair
To aid with understanding, this latter function works like the following Python function.
def black_box(tiles):
    for y, row in enumerate(tiles):
        for x, tile in enumerate(row):
            if tile == 1:
                yield Coord(x, y)
do notation for the list monad is incredibly handy for processing lists, I think, so it's worth wrapping your head around!
In both of these examples I have used the definition
enumerate = zip [0..]
Here's a simple solution (no guarantee that it's viable for tiles of size 10000x10000, that's something for you to check ;)
The approach is, as usual in Haskell, a top-down development. You think: what should blackBox do? For every row of tiles it should collect the Coords of the tiles with 1 for that row, and concatenate them.
This gives you another function, blackBoxRow, for rows only. What should it do? Remove zeros from the row, and wrap the rest in Coords, so there's filter and then map. Also you want to keep the row and column numbers, so you map tiles joined with their respective coordinates.
This gives you:
tiles :: [[Int]]
tiles = [[1,0,0]
,[0,1,0]
,[0,1,0]
]
data Coord = Coord {
x :: Int
,y :: Int
} deriving (Eq, Show)
blackBox :: [[Int]] -> [Coord]
blackBox tiles2d = concat (map blackBoxRow (zip [0..] tiles2d))
blackBoxRow :: (Int, [Int]) -> [Coord]
blackBoxRow (row, tiles1d) = map toCoord $ filter pickOnes (zip [0..] tiles1d) where
pickOnes (_, value) = value == 1
toCoord (col, _) = Coord {x=col, y=row}
main = print $ blackBox tiles
Results in:
~> runhaskell t.hs
[Coord {x = 0, y = 0},Coord {x = 1, y = 1},Coord {x = 1, y = 2}]
The way I see it, you could put your 2D list through a series of transformations. The first one we'll need is one that can replace the 1 in your list with something more useful, such as its row:
assignRow :: Int -> [Int] -> [Int]
assignRow n xs = map (\x -> if x == 1 then n else x) xs
We can now use zipWith and [1..] to perform the first step:
assignRows :: [[Int]] -> [[Int]]
assignRows matrix = zipWith assignRow [1..] matrix
What's handy about this is that it'll work even if the matrix isn't square, and it terminates as soon as the matrix does.
Next we need to assign the column number, and here I'll do a few steps at once. This makes the tuples of the coordinates, but there are invalid ones where r == 0 (this is why I used [1..], otherwise, you'll lose the first row), so we filter them out. Next, we uncurry Coord to make a function that takes a tuple instead, and then we use flip on it, then map this thing over the list of tuples.
assignCol :: [Int] -> [Coord]
assignCol xs = map (uncurry (flip Coord)) $ filter (\(c, r) -> r /= 0) $ zip [1..] xs
And we can build our assignCols:
assignCols :: [[Int]] -> [Coord]
assignCols matrix = concatMap assignCol matrix
which allows us to build the final function
assignCoords :: [[Int]] -> [Coord]
assignCoords = assignCols . assignRows
You could compress this quite a bit with some eta reduction, too.
If you want 0-indexed coordinates, I'll leave you to modify this solution to do so.
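For example, the eta-reduced forms might look like this (my own sketch; the behaviour should be unchanged):
assignRows :: [[Int]] -> [[Int]]
assignRows = zipWith assignRow [1..]

assignCols :: [[Int]] -> [Coord]
assignCols = concatMap assignCol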
Quick and dirty solution:
import Data.Maybe (mapMaybe)
data Coord = Coord {
x :: Int
,y :: Int
} deriving (Eq, Show)
blackBox :: [[Int]] -> [Coord]
blackBox = concatMap (\(y, xks) -> mapMaybe (toMaybeCoord y) xks)
. zip [0..] . map (zip [0..])
where
toMaybeCoord :: Int -> (Int, Int) -> Maybe Coord
toMaybeCoord y (x, k) = if k == 1
then Just (Coord x y)
else Nothing
The zips pair the tile values (which I am referring to as k) with the x and y coordinates (we are dealing with lists, so we have to add the indices if we need them). mapMaybe is convenient so that we can map (in order to construct the Coords) and filter (to remove the zero tiles) in a single step. concatMap also does two things here: it maps a function (the anonymous function within the parentheses) generating a list of lists and then flattens it. Be sure to check the types of the intermediate functions and results to get a clearer picture of the transformations.
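To make those intermediate types easier to inspect, here is the same pipeline written out step by step (my own restatement; it reuses Coord and the mapMaybe import from above):
blackBoxSteps :: [[Int]] -> [Coord]
blackBoxSteps tiles = concatMap rowCoords rowsWithY
  where
    withX     = map (zip [0..]) tiles   -- [[(x, k)]]      : tiles paired with their x
    rowsWithY = zip [0..] withX         -- [(y, [(x, k)])] : rows paired with their y
    rowCoords (y, xks) =
      mapMaybe (\(x, k) -> if k == 1 then Just (Coord x y) else Nothing) xks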
Here it is, using list comprehensions.
blackBox :: [[Integer]] -> [Coord]
blackBox ts = [Coord x y | (t,y) <- zip ts [0..], (e,x) <- zip t [0..], e == 1]
As long as we're collecting answers, here's another:
blackBox :: [[Int]] -> [Coord]
blackBox ts = map (uncurry Coord) xsAndYs
where
xsAndYs = concat $ zipWith applyYs [0..] x1s
applyYs i = map (flip (,) i)
x1s = map (map fst . filter ((==1) . snd)) xs
xs = map (zip [0..]) ts
Explanation:
This assigns the x indexes within each row:
xs = map (zip [0..]) ts
Then I filter each row to keep only the elements with a 1, and then I drop the 1 (since it's no longer useful):
x1s = map (map fst . filter ((==1) . snd)) xs
Which results in something of type [[Int]], which are the rows with xs where 1s used to be. Then I map the ys within each row, flipping the pairs so I'm left with (x,y) instead of (y,x). As a final step, I flatten the rows into a single list, since I don't need to keep them separate anymore:
xsAndYs = concat $ zipWith applyYs [0..] x1s
applyYs i = map (flip (,) i)
Finally I convert each element by mapping Coord over it. uncurry is necessary because Coord doesn't take a tuple as argument.