Efficient table for Dynamic Programming in Haskell

I've coded up the 0-1 Knapsack problem in Haskell. I'm fairly proud of the laziness and level of generality achieved so far.
I start by providing functions for creating and dealing with a lazy 2d matrix.
-- an infinite list whose element at position i is f i
mkList f = map f [0..]
-- an infinite list of such lists: tableIndex (mkTable f) i j == f i j
mkTable f = mkList (\i -> mkList (\j -> f i j))
tableIndex table i j = table !! i !! j
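For example, a quick sanity check in GHCi:
*Main> tableIndex (mkTable (*)) 3 4
12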
I then make a specific table for a given knapsack problem
knapsackTable = mkTable f
  where f 0 _ = 0
        f _ 0 = 0
        f i j | ws !! i > j = leaveI
              | otherwise   = max takeI leaveI
          where takeI  = tableIndex knapsackTable (i-1) (j - ws !! i) + vs !! i
                leaveI = tableIndex knapsackTable (i-1) j
-- weight value pairs; item i has weight ws!!i and value vs!!i
ws = [0,1,2, 5, 6, 7] -- weights
vs = [0,1,7,11,21,31] -- values
And finish off with a couple helper functions for looking at the table
viewTable table maxI maxJ = take (maxI+1) . map (take (maxJ+1)) $ table
printTable table maxI maxJ = mapM_ print $ viewTable table maxI maxJ
This much was pretty easy. But I want to take it a step further.
I want a better data structure for the table. Ideally, it should be
Unboxed (immutable) [edit] never mind this
Lazy
Unbounded
O(1) time to construct
O(1) time complexity for looking up a given entry,
(more realistically, at worst O(log n), where n is i*j for looking up the entry at row i, column j)
Bonus points if you can explain why/how your solution satisfies these ideals.
Also bonus points if you can further generalize knapsackTable, and prove that it is efficient.
In improving the data structure you should try to satisfy the following goals:
If I ask for the solution where the maximum weight is 10 (in my current code, that would be tableIndex knapsackTable 5 10; the 5 means include items 1-5), only the minimal amount of work necessary should be performed. Ideally this means no O(i*j) work for forcing the spine of each row of the table to the necessary column length. You could say this isn't "true" DP, if you believe DP means evaluating the entirety of the table.
If I ask for the entire table to be printed (something like printTable knapsackTable 5 10), the value of each entry should be computed once and only once. The value of a given cell should depend on the values of other cells (DP style: the idea being, never recompute the same subproblem twice).
Ideas:
Data.Array bounded :(
UArray strict :(
Memoization techniques (SO question about DP in Haskell) this might work
Answers that make some compromises to my stated ideals will be upvoted (by me, anyways) as long as they are informative. The answer with the least compromises will probably be the "accepted" one.

First, your criterion for an unboxed data structure is probably a bit misled. Unboxed values must be strict, and they have nothing to do with immutability. The solution I'm going to propose is immutable, lazy, and boxed. Also, I'm not sure in what sense you want construction and querying to be O(1). The structure I'm proposing is lazily constructed, but because it's potentially unbounded, its full construction would take infinite time. Querying the structure takes O(k) time for a key of size k, and of course the value you're looking up may take further time to compute.
The data structure is a lazy trie. I'm using Conal Elliott's MemoTrie library in my code. For genericity, it takes functions instead of lists for the weights and values.
import Data.MemoTrie (HasTrie, memo2)

knapsack :: (Enum a, Eq a, Num a, Num w, Num v, Ord w, Ord v, HasTrie a, HasTrie w) =>
            (a -> w) -> (a -> v) -> a -> w -> v
knapsack weight value = knapsackMem
  where knapsackMem = memo2 knapsack'
        knapsack' 0 _ = 0
        knapsack' _ 0 = 0
        knapsack' i w
          | weight i > w = knapsackMem (pred i) w
          | otherwise    = max (knapsackMem (pred i) w)
                               (knapsackMem (pred i) (w - weight i) + value i)
Basically, it's implemented as a trie with a lazy spine and lazy values. It's bounded only by the key type. Because the entire thing is lazy, its construction before forcing it with queries is O(1). Each query forces a single path down the trie and its value, so it's O(log n) in the magnitude of the key, i.e. O(1) for a bounded key size. As I already said, it's immutable, but not unboxed.
It will share all work in the recursive calls. It doesn't actually allow you to print the trie directly, but something like this should not do any redundant work:
mapM_ (print . uncurry (knapsack (ws !!) (vs !!))) $ range ((0,0), (i,w))  -- range is from Data.Ix
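For a concrete run, here is a minimal sketch reusing the question's weight/value lists as functions (assuming the knapsack definition above; note that (ws !!) is partial past the end of the list):

ws, vs :: [Int]
ws = [0,1,2, 5, 6, 7]
vs = [0,1,7,11,21,31]

main :: IO ()
main = print (knapsack (ws !!) (vs !!) 5 10)  -- 39 for these lists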

Unboxed implies strict and bounded. Anything 100% Unboxed cannot be Lazy or Unbounded. The usual compromise is embodied in converting [Word8] to Data.ByteString.Lazy where there are unboxed chunks (strict ByteString) which are linked lazily together in an unbounded way.
A much more efficient table generator (enhanced to track individual items) could be made using "scanl", "zipWith", and my "takeOnto". This effectively avoids using (!!) while creating the table:
import Data.List (sort, genericTake)

type Table = [ [ Entry ] ]

data Entry = Entry { bestValue :: !Integer, pieces :: [[WV]] }
  deriving (Read, Show)

data WV = WV { weight, value :: !Integer }
  deriving (Read, Show, Eq, Ord)

instance Eq Entry where
  (==) a b = (==) (bestValue a) (bestValue b)

instance Ord Entry where
  compare a b = compare (bestValue a) (bestValue b)

solutions :: Entry -> Int
solutions = length . filter (not . null) . pieces

addItem :: Entry -> WV -> Entry
addItem e wv = Entry { bestValue = bestValue e + value wv, pieces = map (wv:) (pieces e) }

-- Utility function for improve
takeOnto :: ([a] -> [a]) -> Integer -> [a] -> [a]
takeOnto endF = go where
  go n rest | n <= 0 = endF rest
            | otherwise = case rest of
                            (x:xs) -> x : go (pred n) xs
                            [] -> error "takeOnto: unexpected []"

improve oldList wv@(WV {weight = wi, value = vi}) = newList where
  newList | vi <= 0   = oldList
          | otherwise = takeOnto (zipWith maxAB oldList) wi oldList
  -- Dual traversal of index (w-wi) and index w makes this a zipWith
  maxAB e2 e1 = let e2v = addItem e2 wv
                in case compare e1 e2v of
                     LT -> e2v
                     EQ -> Entry { bestValue = bestValue e1
                                 , pieces = pieces e1 ++ pieces e2v }
                     GT -> e1

-- Note that the returned table is finite
-- The dependence on only the previous row makes this a "scanl" operation
makeTable :: [Int] -> [Int] -> Table
makeTable ws vs =
  let wvs = zipWith WV (map toInteger ws) (map toInteger vs)
      nil = repeat (Entry { bestValue = 0, pieces = [[]] })
      totW = sum (map weight wvs)
  in map (genericTake (succ totW)) $ scanl improve nil wvs

-- Create specific table; note that values 1+7 sum to the value 8
ws, vs :: [Int]
ws = [2,3, 5, 5, 6, 7] -- weights
vs = [1,7,8,11,21,31] -- values

t = makeTable ws vs

-- Investigate table
seeTable = mapM_ seeBestValue t
  where seeBestValue row = mapM_ (\v -> putStr (' ' : show (bestValue v))) row >> putChar '\n'

ways = mapM_ seeWays t
  where seeWays row = mapM_ (\v -> putStr (' ' : show (solutions v))) row >> putChar '\n'

-- This has two ways of satisfying a bestValue of 8 for 3 items up to total weight 5
interesting = print (t !! 3 !! 5)

Lazy storable vectors: http://hackage.haskell.org/package/storablevector
Unbounded, lazy, O(chunksize) time to construct, O(n/chunksize) indexing, where chunksize can be sufficiently large for any given purpose. Basically a lazy list with some significant constant-factor benefits.

To memoize functions, I recommend a library like Luke Palmer's memo combinators. The library uses tries, which are unbounded and have O(key size) lookup. (In general, you can't do better than O(key size) lookup because you always have to touch every bit of the key.)
knapsack :: (Int,Int) -> Solution
knapsack = memo f
  where
    memo = pair integral integral
    f (i,j) = ... knapsack (i-b,j) ...
Internally, the integral combinator probably builds an infinite data structure along these lines:
data IntTrie a = Branch (IntTrie a) a (IntTrie a)

integral f = \n -> lookup n table
  where
    table = build f
    build g = Branch (build (\n -> g (2*n))) (g 0) (build (\n -> g (2*n+1)))
Lookup works like this:
lookup 0 (Branch l a r) = a
lookup n (Branch l a r) = if even n then lookup n2 l else lookup n2 r
  where n2 = n `div` 2
There are other ways to build infinite tries, but this one is popular.
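To see the whole scheme in action, here is a self-contained sketch (trie and untrie are my names, not the library's) that memoizes Fibonacci through such a trie:

data IntTrie a = Branch (IntTrie a) a (IntTrie a)

-- build an infinite trie holding f n at position n (non-negative n only)
trie :: (Int -> a) -> IntTrie a
trie f = Branch (trie (\n -> f (2*n))) (f 0) (trie (\n -> f (2*n+1)))

untrie :: IntTrie a -> Int -> a
untrie (Branch _ a _) 0 = a
untrie (Branch l _ r) n
  | even n    = untrie l (n `div` 2)
  | otherwise = untrie r (n `div` 2)

fib :: Int -> Integer
fib = untrie table
  where
    table = trie fib'          -- one trie, shared across all calls
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n-1) + fib (n-2)

main :: IO ()
main = print (fib 1000)  -- fast: each fib' k is computed at most once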

Why not use a Data.Map with another Data.Map inside it? As far as I know it's quite fast.
It wouldn't be lazy though.
More than that, you can implement the Ord typeclass for your data
data Index = Index Int Int
  deriving (Eq, Ord)
and put a two-dimensional index directly as the key. You can achieve laziness by generating this map from a list and then just using
fromList [(Index 0 0, value11), (Index 0 1, value12), ...]
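For example, here is a minimal sketch of that idea applied to the question's knapsack: the map's spine is finite and strict, but the values are lazy, so each cell is computed at most once, on demand:

import qualified Data.Map.Lazy as M

data Index = Index Int Int deriving (Eq, Ord, Show)

ws, vs :: [Int]
ws = [0,1,2, 5, 6, 7]
vs = [0,1,7,11,21,31]

table :: Int -> Int -> M.Map Index Int
table maxI maxJ = m
  where
    m = M.fromList [(Index i j, f i j) | i <- [0..maxI], j <- [0..maxJ]]
    f 0 _ = 0
    f _ 0 = 0
    f i j | ws !! i > j = look (i-1) j
          | otherwise   = max (look (i-1) j) (look (i-1) (j - ws !! i) + vs !! i)
    look i j = m M.! Index i j

main :: IO ()
main = print (table 5 10 M.! Index 5 10)  -- 39, matching the other solutions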

Related

Which recursive calls are allowed in Haskell array creation?

As a short exercise in using Haskell arrays I wanted to implement a function giving the first n (odd) prime numbers. The code below (compiled with GHC 7.10.3) produces a loop error at runtime. "A Gentle Introduction to Haskell" uses recursive calls in array creation to compute Fibonacci numbers (https://www.haskell.org/tutorial/arrays.html, 13.2, code below for reference), which works just fine. My question is:
Where is the difference between the two ways of recursive creation? Which recursive calls are generally allowed when creating arrays?
My code:
import Data.Array.Unboxed

main = putStrLn $ show $ (primes 500)!500 --arbitrary example

primes :: Int -> UArray Int Int
primes n = a
  where
    a = array (1,n) $ primelist 1 [3,5..]
    primelist i (m:ms) =
      if all (not . divides m) [ a!j | j <- [1..(i-1)]]
        then (i, m) : primelist (succ i) ms
        else primelist i ms
    divides m k = m `mod` k == 0
Code from "A Gentle Introduction to Haskell":
fibs :: Int -> Array Int Int
fibs n = a where a = array (0,n) ([(0, 1), (1, 1)] ++
                                  [(i, a!(i-2) + a!(i-1)) | i <- [2..n]])
Thanks in advance for any answers!
Update: I think I finally understood what's going on. array is lazy on the list elements, but is unnecessarily strict on its spine!
This causes a <<loop>> exception, for instance
test :: Array Int Int
test = array (1,2) ((1,1) : if test!1 == 1 then [(2,2)] else [(2,100)])
unlike
test :: Array Int Int
test = array (1,2) ((1,1) : [(2, if test!1 == 1 then 2 else 100)])
So, recursion works as long as it only affects the values.
A working version:
import Data.Array

main :: IO ()
main = do
  putStrLn $ show $ (primes 500)!500 --arbitrary example

-- A spine-lazy version of array
-- Assumes the list carries indices lo..hi
arraySpineLazy :: (Int, Int) -> [(Int, a)] -> Array Int a
arraySpineLazy (lo,hi) xs = array (lo,hi) $ go lo xs
  where
    go i _ | i > hi = []
    go i ~((_,e):ys) = (i, e) : go (succ i) ys

primes :: Int -> Array Int Int
primes n = a
  where
    a :: Array Int Int
    a = arraySpineLazy (1,n) $ primelist 1 (2 : [3,5..])
    primelist :: Int -> [Int] -> [(Int, Int)]
    primelist i _ | i > n = []
    primelist _ [] = [] -- remove warnings
    primelist i (m:ms) =
      if all (not . divides m) [ a!j | j <- [1..(i-1)]]
        then (i, m) : primelist (succ i) ms
        else primelist i ms
    divides m k = m `mod` k == 0
Arguably, we should instead write a lazier variant of listArray, since our array variant discards the first components of the pairs.
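A minimal sketch of what that lazier listArray might look like (listArrayLazy is a name I'm making up; it's the same trick as arraySpineLazy, just taking elements instead of pairs):

import Data.Array

-- spine-lazy analogue of listArray: never forces the list's spine
-- beyond the indices that have been demanded
listArrayLazy :: (Int, Int) -> [a] -> Array Int a
listArrayLazy (lo,hi) xs = array (lo,hi) $ go lo xs
  where
    go i _ | i > hi = []
    go i ~(e:ys) = (i, e) : go (succ i) ys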
This is a strictness issue: you can't generate unboxed arrays recursively, only boxed (regular) ones, since only boxed ones have lazy semantics.
Forget arrays, and consider the following recursive pair definition
let (x,y) = (0,x)
This defines x=0 ; y=0, recursively. However, for the recursion to work, it is necessary that the pair is lazy. Otherwise, it generates an infinite recursion, much as the following would do:
let p = case p of (x,y) -> (0,x)
Above, p evaluates itself before it can expose the (,) pair constructor, so an infinite loop arises. By comparison,
let p = (0, case p of (x,y) -> x)
would work, since p produces the (,) before calling itself. Note however that this relies on the constructor (,) not evaluating the components before returning -- it has to be lazy, and return immediately leaving the components to be evaluated later.
Operationally, a pair is constructed with two thunks inside: two pointers to code, which will evaluate the result later on. Hence the pair is not really a pair of integers, but a pair of indirections-to-integer. This is called "boxing", and is needed to achieve laziness, even if it carries a small computational cost.
By definition, unboxed data structures, like unboxed arrays, avoid boxing, so they are strict, not lazy, and they can not support the same recursion approaches.

How to map a function over a specific range of a Vector in Haskell

I have a very large Vector a and I want to map a function over a small range. For example, if I have a Vector of size 1000, I want to map function f over values at indices 100-200 and then return the entire updated Vector. What is the best way to do this. I'm open to other data structures, but I would prefer not to use mutable vectors.
Edit:
Here's an equivalent in imperative code.
for (int i = startIdx; i < endIdx; i++) {
    bigVector[i] = myFunction(bigVector[i]);
}
The immutable vector implementation of the "vector" package is basically an array. This means that any modifying operation on it requires the traversal of the whole array. That limitation doesn't play well with the requirements that you've mentioned.
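(To illustrate the point: staying with plain Data.Vector, the single-copy version of your loop looks something like the sketch below, where mapRange is a hypothetical helper; the whole vector is copied even though only a small range changes.)

import qualified Data.Vector as V

-- hypothetical helper: one O(n) pass, applying f only on indices [lo, hi)
mapRange :: (a -> a) -> Int -> Int -> V.Vector a -> V.Vector a
mapRange f lo hi = V.imap (\i x -> if i >= lo && i < hi then f x else x)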
Since you're saying that you're open to a choice of a different data structure, the first thing you need to consider is a data structure driven by the Array-Mapped Trie algorithm. That algorithm allows you to project on slices of the data structure, concatenate, drop/take, and append/prepend, all in either constant or logarithmic time. Its efficiency is proven in production, as it drives the ubiquitous immutable vector data structures of such languages as Clojure and Scala.
Fortunately, Haskell has an implementation as well. It's presented by the "persistent-vector" package, and here's how you can efficiently solve your problem using it:
mapOverSlice :: Int -> Int -> (a -> a) -> Vector a -> Vector a
mapOverSlice startIndex length mapper input =
  let
    (split1Heading, split1Trail) =
      splitAt startIndex input
    (split2Heading, split2Trail) =
      splitAt (pred length) split1Trail
  in
    split1Heading <>
    map mapper split2Heading <>
    split2Trail
How about something like
maprange :: (a -> a) -> [a] -> Int -> Int -> [a]
maprange f vector skip nb = (take skip vector) ++ (map f (take nb (drop skip vector))) ++ (drop (skip + nb) vector)
and then use it like
*Main> maprange (\x -> 2*x) [1 .. 10] 3 4
[1,2,3,8,10,12,14,8,9,10]
to double the 4 elements after the first 3 elements.
One way to do this is:
zip your original vector with [1..] to get a vector with indices
map over the zipped vector; if the index is within (skip, skip+nb] then apply f, else return the original value
The implementation is pretty straightforward:
maprange :: (a -> a) -> [a] -> Int -> Int -> [a]
maprange f vector skip nb =
    map (\(idx,v) -> if idx > skip && idx <= skip+nb then f v else v) zipped_vector
  where zipped_vector = zip [1..] vector
The complexity should be O(n) where n is the size of the original vector (I'm not 100% sure about the complexity of zip, though); if you need something more efficient, you should investigate the data structures mentioned in the answer by Nikita Volkov.

Convert a function returning a list to a list of functions

I have a function f :: Int -> [a] which always returns a list of size n, like so:
f 0 = [a_0_1, a_0_2, ..., a_0_n]
f 1 = [a_1_1, a_1_2, ..., a_1_n]
.
.
.
f k = [a_k_1, a_k_2, ..., a_k_n]
I want to transform this into a list of functions:
[f_1, f_2, ..., f_n] :: [Int -> a]
where
f_k i = a_i_k
I hope the notation I've used is clear enough to convey what I want.
Use lambdas
fn = [(\k -> (f k)!!i) | i <- [0..n - 1]]
then
[f1, f2, f3, ...] = fn
but you should analyze your general problem (this approach is slow).
Complete sandbox code:
n = 5
f k = [2 * k + i | i <- [1..n]]
fn = [(\k -> (f k)!!i) | i <- [0..n - 1]]
(f1:f2:_) = fn

main = do
  print $ f 4
  print $ (fn!!3) 4
  print $ f2 4
Let's start by actually applying the function to all these values:
listf = map f [0..k]
So
listf = [[a_0_1, a_0_2, ..., a_0_n]
,[a_1_1, a_1_2, ..., a_1_n]
,...
,[a_k_1, a_k_2, ..., a_k_n]]
Now we have a list of lists, but it's the wrong way around. So let's take the transpose:
listft = transpose listf
listft = [[a_0_1, a_1_1, ..., a_k_1]
,[a_0_2, a_1_2, ..., a_k_2]
,...
,[a_0_n, a_1_n, ..., a_k_n]]
Now we have a list of lists, and each of the inner lists represents an f_i for some i. But we don't want to stop with that representation, because actually calculating the jth element of a list is O(j). So let's use Vector, from Data.Vector, instead:
listmapsf = map fromList listft
Now we have a list of Vectors, each of which represents an f_i. We can turn those into functions using Data.Vector's (!):
functions = map (!) listmapsf
Note: this does absolutely nothing to verify that the input lists are all the same length. It's probably also not the most efficient way to produce the list of functions, but it's not a bad one.
Edit: per user3237465's suggestion, I've replaced the IntMap representation with a Vector one.
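Here's a quick end-to-end sketch of this approach, with a made-up f of n = 3:

import Data.List (transpose)
import qualified Data.Vector as V

f :: Int -> [Int]
f k = [k, 2*k, 3*k]  -- hypothetical example; always returns length 3

-- one function per column of f's results, each backed by a Vector
functions :: [Int -> Int]
functions = map (V.!) (map V.fromList (transpose (map f [0..10])))

main :: IO ()
main = print (map ($ 4) functions)  -- [4,8,12], the column view of f 4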
This is a good question, and deserves to be applied to an arbitrary function of general type a -> [b].
I actually asked a related question about the monadic sequence function, regarding making its opposite, an unsequence function. It was shown to be impossible. Here's why:
When a function returns a list, it may be of an arbitrary length. This means that we can only determine the length of the returned list when we call the function. As a result, we cannot make a list of fixed length without calling the function.
A simpler way of understanding this is to imagine that we've got values trapped in an IO monad, of type IO [Int]. This is perfectly comparable to a -> [Int], because both values can only be manipulated while they're kept inside their monadic type. This is different for sequence :: Monad m => [m a] -> m [a], because the [a] monad can be deconstructed, i.e. it's a comonad too.
To put it simply, 'pure' monads can only be constructed, and thus you cannot take a list's length (which is fundamental if we are to construct a list) from inside a function monad or an IO monad. I hope this helps you!
If you have
f 0 = [a_0_0, a_0_1 ... a_0_m]
f 1 = [a_1_0, a_1_1 ... a_1_m]
...
f n = [a_n_0, a_n_1 ... a_n_m]
then
matr = transpose $ map f [0..n]
gives you
[ [a_0_0, a_1_0 ... a_n_0]
, [a_0_1, a_1_1 ... a_n_1]
...
, [a_0_m, a_1_m ... a_n_m]
]
Your equation is f_j i = a_i_j, where i ranges over [0..n] and j ranges over [0..m].
Since matr is transposed, the equation becomes f_j i = matr_j_i, which can be reflected as follows:
map (\j i -> matr !! j !! i) [0..]
The whole function is
transform f n = map (\j i -> matr !! j !! i) [0..] where
  matr = transpose $ map f [0..n]
Or just
transform f n = map (!!) $ transpose $ map f [0..n]
EDIT
As @josejuan pointed out, this code is not efficient, since, while evaluating
(transform f n !! j) i
transpose forces f i' for all i' <= i.
If you don't need all of the values, don't compute them until they are needed. You just need to define indexed access, i.e. let frc r c = f r !! c, where r is the row and c is the column.
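A tiny demonstration of that lazy, index-on-demand access (Debug.Trace just shows which rows actually get forced; f here is a made-up example):

import Debug.Trace (trace)

f :: Int -> [Int]
f k = trace ("forcing row " ++ show k) [2 * k + i | i <- [1..5]]

-- indexed access: only the requested row is ever computed
frc :: Int -> Int -> Int
frc r c = f r !! c

main :: IO ()
main = print (frc 3 2)  -- traces only "forcing row 3", then prints 9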

Infinite lazy bitmap

I am trying to construct a lazy data structure that holds an infinite bitmap. I would like to support the following operations:
true :: InfBitMap
Returns an infinite bitmap of True, i.e. all positions should have value True.
falsify :: InfBitMap -> [Int] -> InfBitMap
Set all positions in the list to False. The list is possibly infinite. For example, falsify true [0,2..] will return a bitmap where all (and only) odd positions are True.
check :: InfBitMap -> Int -> Bool
Check the value of the index.
Here is what I could do so far.
import Data.Sequence (Seq, (><), singleton, update, index)
import qualified Data.Sequence as Seq
import Data.List (find, foldl')
import Data.Maybe (fromJust)

-- InfBitMap will look like [(#), (#, #), (#, #, #, #)..]
type InfBitMap = [Seq Bool]

true :: InfBitMap
true = iterate (\x -> x >< x) $ singleton True

-- O(L * log N) where N is the biggest index in the list checked for later
-- and L is the length of the index list. It is assumed that the list is
-- sorted and unique.
falsify :: InfBitMap -> [Int] -> InfBitMap
falsify ls is = map (falsify' is) ls
  where
    -- Update each sequence with all indices within its length.
    -- Basically composes a list of (update pos False) for all positions
    -- within the length of the sequence and then applies it.
    falsify' is l = foldl' (.) id
                           (map (flip update False)
                                (takeWhile (< Seq.length l) is))
                    $ l

-- O(log N) where N is the index.
check :: InfBitMap -> Int -> Bool
check ls i = index (fromJust $ find ((> i) . Seq.length) ls) i
I am wondering if there is some Haskellish concept/data-structure that I am missing that would make my code more elegant / more efficient (constants do not matter to me, just order). I tried looking at Zippers and Lenses but they do not seem to help. I would like to keep the complexities of updates and checks logarithmic (maybe just amortized logarithmic).
Note: before someone suspects it, no this is not a homework problem!
Update:
It just occurred to me that check can be improved to:
-- O(log N) where N is the index.
-- Returns "collapsed" bitmap for later more efficient checks.
check :: InfBitMap -> Int -> (Bool, InfBitMap)
check ls i = (index l i, ls')
  where
    ls'@(l:_) = dropWhile ((<= i) . Seq.length) ls
Which can be turned into a Monad for code cleanliness.
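For instance, a minimal sketch using Control.Monad.State from mtl (my assumption, not part of the question's code), wrapping the new check:

import Control.Monad.State (State, state)

-- check already has the shape s -> (a, s), so it lifts directly
checkM :: Int -> State InfBitMap Bool
checkM i = state (\bm -> check bm i)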
A slight variation on the well-known integer trie seems to be applicable here.
{-# LANGUAGE DeriveFunctor #-}

data Trie a = Trie a (Trie a) (Trie a) deriving (Functor)

true :: Trie Bool
true = Trie True true true

-- O(log(index))
check :: Trie a -> Int -> a
check t i | i < 0 = error "negative index"
check t i = go t (i + 1) where
  go (Trie a _ _) 1 = a
  go (Trie _ l r) i = go (if even i then l else r) (div i 2)

-- O(log(index))
modify :: Trie a -> Int -> (a -> a) -> Trie a
modify t i f | i < 0 = error "negative index"
modify t i f = go t (i + 1) where
  go (Trie a l r) 1 = Trie (f a) l r
  go (Trie a l r) i | even i = Trie a (go l (div i 2)) r
  go (Trie a l r) i = Trie a l (go r (div i 2))
Unfortunately we can't use modify to implement falsify because we can't handle infinite lists of indices that way (all modifications have to be performed before an element of the trie can be inspected). Instead, we should do something more like a merge:
ascIndexModify :: Trie a -> [(Int, a -> a)] -> Trie a
ascIndexModify t is = go 1 t is where
  go _ t [] = t
  go i t@(Trie a l r) ((i', f):is) = case compare i (i' + 1) of
    LT -> Trie a (go (2*i) l ((i', f):is)) (go (2*i+1) r ((i', f):is))
    GT -> go i t is
    EQ -> Trie (f a) (go (2*i) l is) (go (2*i+1) r is)

falsify :: Trie Bool -> [Int] -> Trie Bool
falsify t is = ascIndexModify t [(i, const False) | i <- is]
We assume strictly ascending indices in is, since otherwise we would skip places in the trie or even get non-termination, for example in check (falsify t (repeat 0)) 1.
The time complexities are a bit complicated by laziness. In check (falsify t is) index, we pay an additional cost of about log2(index) comparisons, plus a further length (filter (< index) is) comparisons (i.e. the cost of stepping over all indices smaller than what we're looking up). You could say it's O(max(log(index), length(filter (< index) is))). Anyway, it's definitely better than the O(length(is) * log(index)) we would get for a falsify implemented for finite is-es using modify.
We must keep in mind that tree nodes are evaluated once, and subsequent check-s for the same index after the first check are not paying any extra cost for falsify. Again, laziness makes this a bit complicated.
This falsify is also pretty well-behaved when we want to traverse a prefix of a trie. Take this toList function:
trieToList :: Trie a -> [a]
trieToList t = go [t] where
  go ts = [a | Trie a _ _ <- ts]
          ++ go (do {Trie _ l r <- ts; [l, r]})
It's a standard breadth-first traversal, in linear time. The traversal time remains linear when we compute take n $ trieToList (falsify t is), since falsify incurs at most n + length (filter (<n) is) extra comparisons, which is at most 2 * n, assuming strictly increasing is.
(side note: the space requirement of breadth-first traversal is rather painful, but I can't see a simple way to help it, since iterative deepening is even worse here, because there the whole tree must be held in memory, while bfs only has to remember the bottom level of the tree).
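Putting it together, a small usage sketch of the trie-based version (the expected output follows from falsifying every even index):

main :: IO ()
main = do
  let t = falsify true [0,2..]  -- falsify every even index, an infinite list
  print (map (check t) [0..9])  -- [False,True,False,True,False,True,False,True,False,True]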
One way to represent this is as a function.
true = const True
falsify ls is = \i -> not (i `elem` is) && ls i
check ls i = ls i
The true and falsify functions are nice and efficient. The check function can be as bad as linear in the falsified list, and it won't terminate if that list is infinite and the index isn't in it. It's possible to improve the efficiency of the same basic idea. I like its elegance.

Is there an indexed list in Haskell and is it good or bad?

I am a newcomer to the Haskell world and I am wondering if there is something like this:
data IndexedList a = IList Int [a]

findIndex :: (Int -> Int) -> IndexedList a -> (a, IndexedList a)
findIndex f (IList x l) = (l !! f x, IList (f x) l)

next :: IndexedList a -> (a, IndexedList a)
next x = findIndex (+1) x
I've noticed that this kind of list is not purely functional but kind of useful for some applications. Should it be considered harmful?
Thanks,
Bob
It's certainly useful to have a list that comes equipped with a pointer to a particular location in the list. However, the way it's usually done in Haskell is somewhat different - rather than using an explicit pointer, we tend to use a zipper.
The list zipper looks like this
data ListZipper a = LZ [a] a [a] deriving (Show)
You should think of the middle field a as being the element that is currently pointed to, the first field [a] as being the elements before the current position, and the final field [a] as being the elements after the current position.
Usually we store the elements before the current one in reverse order, for efficiency, so that the list [0, 1, 2, *3*, 4, 5, 6] with a pointer to the middle element, would be stored as
LZ [2,1,0] 3 [4,5,6]
You can define functions that move the pointer to the left or right
-- note both are partial: left fails at the start of the list, right at the end
left  (LZ (a:as) b bs) = LZ as a (b:bs)
right (LZ as a (b:bs)) = LZ (a:as) b bs
If you want to move to the left or right n times, then you can do that with the help of a function that takes another function, and applies it n times to its argument
times n f = (!!n) . iterate f
so that to move left three times, you could use
>> let lz = LZ [2,1,0] 3 [4,5,6]
>> (3 `times` left) lz
LZ [] 0 [1,2,3,4,5,6]
Your two functions findIndex and next can be written as
next :: ListZipper a -> (a, ListZipper a)
next = findIndex 1

findIndex :: Int -> ListZipper a -> (a, ListZipper a)
findIndex n x = let y@(LZ _ a _) = (n `times` right) x in (a, y)
Contrary to what you think, this list is in fact purely functional. The reason is that IList (f x) l creates a new list (and does not, as you may think, modify the current IndexedList). It is in general not that easy to create non-purely-functional data structures or functions in Haskell, as long as you stay away from unsafePerformIO.
The reason I would recommend against using the IndexedList is that there is no assurance that the index is less than the length of the list. In this case the lookup l!!(f x) will fail with an exception, which is generally considered bad style in Haskell. An alternative could be to use a safe lookup, which returns a Maybe a like the following:
findIndex :: (Int -> Int) -> IndexedList a -> (Maybe a, IndexedList a)
findIndex f (IList i l) = (maybe_x, IList new_i l)
  where
    new_i = f i
    maybe_x = if new_i < length l
                then Just (l !! new_i)
                else Nothing
I also can't think of a use case where such a list would be useful, but I guess I am limited by my creativity ;)
