Looking for a generic `bisect` function - haskell

I am looking for a bisect operation in Haskell similar to Python's bisect_left() and friends. The input would be a lower bound, an upper bound, a non-decreasing function (Ord a)=>(Int->a) which must be defined for all integers between the lower and upper bound, and a search value. The return value is the highest integer i where lower <= i <= upper and f(i) < search_term. Performance should be O(log(n)).
Hoogling for this:
(Ord a)=>(Int->a)->Int->Int->a->Int
does not yield any results.
Is there a standard, generic binary search operator in a library somewhere?

Ross Paterson's binary-search package on Hackage does what you're looking for. Specifically, see searchFromTo, which has type signature
searchFromTo :: Integral a => (a -> Bool) -> a -> a -> Maybe a
As Tikhon points out, [a] in Haskell is a linked list rather than an array. Since linked lists only support sequential access, it is not possible to get a logarithmic-time search on an [a] data structure. Instead, you should use a genuine array data structure -- see the vector library for the preferred implementation of arrays.
Dan Doel has written a family of binary search functions for the mutable vectors in the vector package: see Data.Vector.Algorithms.Search in his vector-algorithms library. In contrast to Ross Paterson's library, which provides a pure API, the API in Data.Vector.Algorithms.Search is monadic (that is, it must be run in the ST monad or the IO monad).
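For instance, the question's bisect (the largest index whose image is still below the search term) can be expressed as a thin wrapper around searchFromTo. The wrapper below is my own sketch, not part of the package, and assumes lower <= upper:

import Numeric.Search.Range (searchFromTo)

-- Highest i in [lo..hi] with f i < x, if any.
bisect :: (Ord a, Integral b) => (b -> a) -> b -> b -> a -> Maybe b
bisect f lo hi x =
  case searchFromTo (\i -> f i >= x) lo hi of
    Just i  -> if i == lo then Nothing else Just (i - 1)
    Nothing -> Just hi   -- every f i in the range is below x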

A function like bisect_left (assuming I read its documentation correctly) cannot really exist for lists.
This makes sense: since you don't have O(1) random access in lists (remember that Haskell lists are linked lists, while Python uses something more like a vector), you cannot really get an O(log n) binary search.
In particular, just getting to the middle of the list takes O(n/2) (which is just O(n)) steps, so any algorithm that involves the middle of the list (like binary search) must be in Ω(n).
In short, binary search does not make sense on lists. If you're doing a lot of searching, you probably want a different data structure. In particular, take a look at Data.Set, which uses binary trees internally.
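For example, if what you ultimately need is "the largest element strictly below the search term", Data.Set already provides that in O(log n). A small illustration (mine, not from the answer):

import qualified Data.Set as Set

largestBelow :: Ord a => a -> Set.Set a -> Maybe a
largestBelow = Set.lookupLT

-- largestBelow 4 (Set.fromList [1, 3, 5, 7])  ==  Just 3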

binary_search :: (Ord a, Integral b) => (b -> a) -> b -> b -> a -> b
binary_search f low hi a = go low hi
  where
    go low hi | low + 1 == hi = low
    go low hi = go low' hi'
      where
        mid = (low + hi) `div` 2
        (low', hi') = if f mid < a then (mid, hi) else (low, mid)
(This may have an off-by-one error.)
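For instance (a quick check of mine, not part of the answer): binary_search id 0 10 (5 :: Int) evaluates to 4, the largest index whose image is still below 5. The boundary cases, e.g. when even f low is not below the target, are where the off-by-one warning applies.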

Related

How do we overcome the compile time and runtime gap when programming in a Dependently Typed Language?

I'm told that in a dependent type system, "types" and "values" are mixed, and we can treat both of them as "terms" instead.
But there is something I can't understand: in a strongly typed programming language without dependent types (like Haskell), types are decided (inferred or checked) at compile time, but values are decided (computed or read as input) at runtime.
I think there must be a gap between these two stages. Just think: if a value is interactively read from STDIN, how can we reference this value in a type, which must be decided ahead of time?
E.g. there is a natural number n and a list of natural numbers xs (containing n elements) which I need to read from STDIN; how can I put them into a data structure Vect n Nat?
Suppose we input n :: Int at runtime from STDIN. We then read n strings, and store them into vn :: Vect n String (pretend for the moment this can be done).
Similarly, we can read m :: Int and vm :: Vect m String. Finally, we concatenate the two vectors: vn ++ vm (simplifying a bit here). This can be type checked, and will have type Vect (n+m) String.
Now it is true that the type checker runs at compile time, before the values n,m are known, and also before vn,vm are known. But this does not matter: we can still reason symbolically on the unknowns n,m and argue that vn ++ vm has that type, involving n+m, even if we do not yet know what n+m actually is.
It is not that different from doing math, where we manipulate symbolic expressions involving unknown variables according to some rules, even if we do not know the values of the variables. We don't need to know what number is n to see that n+n = 2*n.
Similarly, the type checker can type check
-- pseudocode
readNStrings :: (n :: Int) -> IO (Vect n String)
readNStrings O     = return Vect.empty
readNStrings (S p) = do
  s <- getLine
  vp <- readNStrings p
  return (Vect.cons s vp)
(Well, actually some more help from the programmer could be needed to typecheck this, since it involves dependent matching and recursion. But I'll neglect this.)
Importantly, the type checker can check that without knowing what n is.
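To make this concrete, here is a minimal sketch of the same function in actual GHC Haskell, using a GADT of length-indexed vectors and a singleton type for the length (the names Nat, SNat, Vect and readNStrings are illustrative, not taken from any particular library):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Singleton: a runtime value that mirrors the type-level length.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

data Vect (n :: Nat) a where
  VNil  :: Vect 'Z a
  VCons :: a -> Vect n a -> Vect ('S n) a

readNStrings :: SNat n -> IO (Vect n String)
readNStrings SZ     = return VNil
readNStrings (SS p) = do
  s  <- getLine
  vp <- readNStrings p
  return (VCons s vp)

Pattern matching on the SNat is what lets the recursion refine the length in the result type.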
Note that the same issue actually already arises with polymorphic functions.
fst :: forall a b. (a, b) -> a
fst (x, y) = x
test1 = fst # Int # Float (2, 3.5)
test2 = fst # String # Bool ("hi!", True)
...
One might wonder "how can the typechecker check fst without knowing what types a and b will be at runtime?". Again, by reasoning symbolically.
With type arguments this is arguably more obvious, since we usually run programs after type erasure, unlike value parameters like our n :: Int above, which cannot be erased. Still, there is some similarity between universally quantifying over types and over Int.
It seems to me that there are two questions here:
Given that some values are unknown during compile-time (e.g., values read from STDIN), how can we make use of them in types? (Note that chi has already given an excellent answer to this.)
Some operations (e.g., getLine) seem to make absolutely no sense at compile-time; how could we possibly talk about them in types?
The answer to (1), as chi has said, is symbolic or abstract reasoning. You can read in a number n, and then have a procedure that builds a Vect n Nat by reading from the command line n times, making use of arithmetic properties such as the fact that 1+(n-1) = n for nonzero natural numbers.
The answer to (2) is a bit more subtle. Naively, you might want to say "this function returns a vector of length n, where n is read from the command line". There are two types you might try to give this (apologies if I'm getting Haskell notation wrong)
unsafePerformIO (do n <- getLine; return (IO (Vect (read n :: Int) Nat)))
or (in pseudo-Coq notation, since I'm not sure what Haskell's notation for existential types is)
IO (exists n, Vect n Nat)
These two types can actually both be made sense of, and say different things. The first type, to me, says "at compile time, read n from the command line, and return a function which, at runtime, gives a vector of length n by performing IO". The second type says "at runtime, perform IO to get a natural number n and a vector of length n".
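A sketch of the second type in GHC Haskell, reusing Vect, SNat and friends from the sketch further up (the wrapper name SomeVect and the function are mine): the unknown length is hidden behind an existential wrapper.

data SomeVect a where
  SomeVect :: SNat n -> Vect n a -> SomeVect a

readSomeStrings :: Int -> IO (SomeVect String)
readSomeStrings 0 = return (SomeVect SZ VNil)
readSomeStrings k = do
  s <- getLine
  SomeVect n v <- readSomeStrings (k - 1)
  return (SomeVect (SS n) (VCons s v))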
The way I like looking at this is that all side effects (other than, perhaps, non-termination) are monad transformers, and there is only one monad: the "real-world" monad. Monad transformers work just as well at the type level as at the term level; the one thing which is special is run :: M a -> a which executes the monad (or stack of monad transformers) in the "real world". There are two points in time at which you can invoke run: one is at compile time, where you invoke any instance of run which shows up at the type level. Another is at runtime, where you invoke any instance of run which shows up at the value level. Note that run only makes sense if you specify an evaluation order; if your language does not specify whether it is call-by-value or call-by-name (or call-by-push-value or call-by-need or call-by-something-else), you can get incoherencies when you try to compute a type.

Apply function to all pairs efficiently

I need a second order function pairApply that applies a binary function f to all unique pairs of a list-like structure and then combines them somehow. An example / sketch:
pairApply (+) f [a, b, c] = f a b + f a c + f b c
Some research leads me to believe that Data.Vector.Unboxed probably will have good performance (I will also need fast access to specific elements); it is also necessary for Statistics.Sample, which would come in handy further down the line.
With this in mind I have the following, which almost compiles:
import qualified Data.Vector.Unboxed as U

pairElement :: (U.Unbox a, U.Unbox b)
            => U.Vector a
            -> (a -> a -> b)
            -> Int
            -> a
            -> U.Vector b
pairElement v f idx el =
  U.map (f el) $ U.drop (idx + 1) v

pairUp :: (U.Unbox a, U.Unbox b)
       => (a -> a -> b)
       -> U.Vector a
       -> U.Vector (U.Vector b)
pairUp f v = U.imap (pairElement v f) v

pairApply :: (U.Unbox a, U.Unbox b)
          => (b -> b -> b)
          -> b
          -> (a -> a -> b)
          -> U.Vector a
          -> b
pairApply combine neutral f v =
  folder $ U.map folder (pairUp f v)
  where
    folder = U.foldl combine neutral
The reason this doesn't compile is that there is no Unbox instance for U.Vector (U.Vector a). I have been able to create new unboxed instances in other cases using Data.Vector.Unboxed.Deriving, but I'm not sure it would be so easy in this case (transform it to a pair where the first element is all the inner vectors concatenated and the second is the lengths of the vectors, so we know how to unpack?).
My question can be stated in two parts:
Does the above implementation make sense at all or is there some quick library function magic etc that could do it much easier?
If so, is there a better way to make an unboxed vector of vectors than the one sketched above?
Note that I'm aware that foldl is probably not the best choice; once I've got the implementation sorted I plan to benchmark with a few different folds.
There is no way to define a classical instance for Unbox (U.Vector b), because that would require preallocating a memory area in which each element (i.e. each subvector!) has the same fixed amount of space. But in general, each of them may be arbitrarily big, so that's not feasible at all.
It might in principle be possible to define that instance by storing only a flattened form of the nested vector plus an extra array of indices (where each subvector starts). I once briefly gave this a try; it actually seems somewhat promising as far as immutable vectors are concerned, but a G.Vector instance also requires a mutable implementation, and that's hopeless for such an approach (because any mutation that changes the number of elements in one subvector would require shifting everything behind it).
Usually, it's just not worth it, because if the individual element vectors aren't very small the overhead of boxing them won't matter, i.e. often it makes sense to use B.Vector (U.Vector b).
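For illustration, a sketch (mine, not from the answer) of what the question's pairUp would look like with a boxed outer vector and unboxed inner vectors:

import qualified Data.Vector as B
import qualified Data.Vector.Unboxed as U

pairUp :: (U.Unbox a, U.Unbox b)
       => (a -> a -> b) -> U.Vector a -> B.Vector (U.Vector b)
pairUp f v = B.generate (U.length v) $ \i ->
  U.map (f (v U.! i)) (U.drop (i + 1) v)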
For your application, however, I would not do that at all: there's no need to ever wrap the upper element-choices in a single triangular array. (And it would be really bad for performance to do that, because it would make the algorithm take O(n²) memory rather than the O(n) that is all that's needed.)
I would just do the following:
pairApply combine neutral f v
  = U.ifoldl' (\acc i p -> U.foldl' (\acc' q -> combine acc' $ f p q)
                                    acc
                                    (U.drop (i+1) v))
              neutral v
This corresponds pretty much to the obvious nested-loops imperative implementation
pairApply(combine, b, f, v):
  for(i in 0..length(v)-1):
    for(j in i+1..length(v)-1):
      b = combine(b, f(v[i], v[j]))
  return b
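As a quick sanity check of the Haskell version above (my own example, not from the answer): pairApply (+) 0 (*) (U.fromList [1, 2, 3 :: Int]) evaluates to 1*2 + 1*3 + 2*3 = 11.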
My answer is basically the same as leftaroundabout's nested-loops imperative implementation:
import Data.List (foldl')
import Data.Vector (Vector, (!))   -- boxed vectors; Data.Vector.Unboxed works the same way here

pairApply :: (Int -> Int -> Int) -> Vector Int -> Int
pairApply f v = foldl' (+) 0 [f (v ! i) (v ! j) | i <- [0..(n-1)], j <- [(i+1)..(n-1)]]
  where n = length v
As far as I can tell, there is no performance issue with this implementation.
Non-polymorphic for simplicity.

Infinite (or lazy) sampling of a probability distribution

I would like to have a function that returns a lazy infinite list of integers following a Poisson distribution, with a type signature similar to this one:
samplePoisson :: MonadRandom m => Double -> m [Int]
where the Double argument is the lambda parameter of the Poisson distribution.
I have looked into some libraries (statistics, random-fu) but none of them provides what I want.
Edit:
Yes, both libraries provide functions for generating single values that follow a distribution, which can be easily extended to generate a finite list by using replicateM.
Why do I want an infinite list? Because I don't know a priori the number of elements that must be generated.
Let me show an example.
Imagine I have a function that returns the position of the first 1 in a list.
findFirst1 :: [Int] -> Int
If I had the samplePoisson function for generating a lazy list, then I could write:
findFirst1 <$> samplePoisson 3.0
However, if I am not able to generate a lazy list, then I would have to write a new function with a type signature similar to this one (using the random-fu library):
findFirst1M :: RVar Int -> RVar Int
Hence, I lose the function findFirst1 that I originally had.
I hope I made myself clear this time.
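One possible way to get such a lazy list, sketched here with a pure StdGen from System.Random rather than MonadRandom (my own sketch, not from the thread and not something either library provides): generate single Poisson variates with Knuth's multiply-uniforms method and unfold them into an infinite stream.

import Data.List (unfoldr)
import System.Random (StdGen, randomR)

-- One Poisson(lambda) sample, Knuth's method (adequate for small lambda).
poisson1 :: Double -> StdGen -> (Int, StdGen)
poisson1 lambda = go 0 1
  where
    limit = exp (negate lambda)
    go k p g
      | p' <= limit = (k, g')
      | otherwise   = go (k + 1) p' g'
      where
        (u, g') = randomR (0, 1) g
        p'      = p * u

-- Lazy, infinite stream of samples.
samplePoisson :: Double -> StdGen -> [Int]
samplePoisson lambda = unfoldr (Just . poisson1 lambda)

With this, findFirst1 can be applied directly, e.g. findFirst1 (samplePoisson 3.0 gen) for a gen obtained from newStdGen.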

Counting number of elements in a list that satisfy the given predicate

Does the Haskell standard library have a function that, given a list and a predicate, returns the number of elements satisfying that predicate? Something with type (a -> Bool) -> [a] -> Int. My Hoogle search didn't return anything interesting. Currently I am using length . filter pred, which I don't find to be a particularly elegant solution. My use case seems to be common enough to have a better library solution than that. Is that the case, or is my intuition wrong?
The length . filter p implementation isn't nearly as bad as you suggest. In particular, it has only constant overhead in memory and speed, so yeah.
For things that use stream fusion, like the vector package, length . filter p will actually be optimized so as to avoid creating an intermediate vector. Lists, however, use what's called foldr/build fusion at the moment, which is not quite smart enough to optimize length . filter p without creating linearly large thunks that risk stack overflows.
For details on stream fusion, see this paper. As I understand it, the reason that stream fusion is not currently used in the main Haskell libraries is that (as described in the paper) about 5% of programs perform dramatically worse when implemented on top of stream-based libraries, while foldr/build optimizations can never (AFAIK) make performance actively worse.
No, there is no predefined function that does this, but I would say that length . filter pred is, in fact, an elegant implementation; it's as close as you can get to expressing what you mean without just invoking the concept directly, which you can't do if you're defining it.
The only alternatives would be a recursive function or a fold, which IMO would be less elegant, but if you really want to:
import Data.List (foldl')

foo :: (a -> Bool) -> [a] -> Int
foo p = foldl' (\n x -> if p x then n+1 else n) 0
This is basically just inlining length into the definition. As for naming, I would suggest count (or perhaps countBy, since count is a reasonable variable name).
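For example (a quick check, not in the original answer): foo even [1..10] evaluates to 5.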
Haskell is a high-level language. Rather than provide one function for every possible combination of circumstances you might ever encounter, it provides you with a smallish set of functions that cover all of the basics, and you then glue these together as required to solve whatever problem is currently at hand.
In terms of simplicity and conciseness, this is as elegant as it gets. So yes, length . filter pred is absolutely the standard solution. As another example, consider elem, which (as you may know) tells you whether a given item is present in a list. The standard reference implementation for this is actually
elem :: Eq x => x -> [x] -> Bool
elem x = foldr (||) False . map (x ==)
In other words: compare every element in the list to the target element, creating a new list of Bools. Then fold the logical-OR function over this new list.
If this seems inefficient, try not to worry about it. In particular,
The compiler can often optimise away temporary data structures created by code like this. (Remember, this is the standard way to write code in Haskell, so the compiler is tuned to deal with it.)
Even if it can't be optimised away, laziness often makes such code fairly efficient anyway.
(In this specific example, the OR function will terminate the loop as soon as a match is seen - just like what would happen if you hand-coded it yourself.)
As a general rule, write code by gluing together pre-existing functions. Change this only if performance isn't good enough.
This is my amateurish solution to a similar problem: count the number of negative integers in a list l.
nOfNeg l = length (filter (< 0) l)
main = print (nOfNeg [0, -1, -2, 1, 2, 3, 4])  -- 2
No, there isn't!
As of 2020, there is indeed no such idiom in the Haskell standard library yet! One could (and should), however, define an idiom howMany (resembling good old any):
howMany p xs = sum [ 1 | x <- xs, p x ]
-- howMany = (length .) . filter
main = print $ howMany (/= 0) [0..9]
Try howMany = (length .) . filter.
I'd do it manually:
howmany :: (a -> Bool) -> [a] -> Int
howmany _ []          = 0
howmany pred (x : xs) = if pred x then 1 + howmany pred xs
                                  else howmany pred xs

Typed FP: Tuple Arguments and Curriable Arguments

In statically typed functional programming languages, like Standard ML, F#, OCaml and Haskell, a function will usually be written with the parameters separated from each other and from the function name simply by whitespace:
let add a b =
    a + b
The type here is "int -> (int -> int)", i.e. a function that takes an int and returns a function which in its turn takes an int and finally returns an int. Thus currying becomes possible.
It's also possible to define a similar function that takes a tuple as an argument:
let add(a, b) =
    a + b
The type becomes "(int * int) -> int" in this case.
From the point of view of language design, is there any reason why one could not simply identify these two type patterns in the type algebra? In other words, so that "(a * b) -> c" reduces to "a -> (b -> c)", allowing both variants to be curried with equal ease.
I assume this question must have cropped up when languages like the four I mentioned were designed. So does anyone know any reason or research indicating why all four of these languages chose not to "unify" these two type patterns?
I think the consensus today is to handle multiple arguments by currying (the a -> b -> c form) and to reserve tuples for when you genuinely want values of tuple type (in lists and so on). This consensus is respected by every statically typed functional language since Standard ML, which (purely as a matter of convention) uses tuples for functions that take multiple arguments.
Why is this so? Standard ML is the oldest of these languages, and when people were first writing ML compilers, it was not known how to handle curried arguments efficiently. (At the root of the problem is the fact that any curried function could be partially applied by some other code you haven't seen yet, and you have to compile it with that possibility in mind.) Since Standard ML was designed, compilers have improved, and you can read about the state of the art in an excellent paper by Simon Marlow and Simon Peyton Jones.
LISP, which is the oldest functional language of them all, has a concrete syntax which is extremely unfriendly to currying and partial application. Hrmph.
At least one reason not to conflate curried and uncurried functional types is that it would be very confusing when tuples are used as return values (a convenient way in these typed languages to return multiple values). In the conflated type system, how can functions remain nicely composable? Would a -> (a * a) also transform to a -> a -> a? If so, are (a * a) -> a and a -> (a * a) the same type? If not, how would you compose a -> (a * a) with (a * a) -> a?
You propose an interesting solution to the composition issue. However, I don't find it satisfying because it doesn't mesh well with partial application, which is really a key convenience of curried functions. Let me attempt to illustrate the problem in Haskell:
apply f a b = f a b
vecSum (a1,a2) (b1,b2) = (a1+b1,a2+b2)
Now, perhaps your solution could understand map (vecSum (1,1)) [(0,1)], but what about the equivalent map (apply vecSum (1,1)) [(0,1)]? It becomes complicated! Does your fullest unpacking mean that the (1,1) is unpacked with apply's a & b arguments? That's not the intent... and in any case, reasoning becomes complicated.
In short, I think it would be very hard to conflate curried and uncurried functions while (1) preserving the semantics of code valid under the old system and (2) providing a reasonable intuition and algorithm for the conflated system. It's an interesting problem, though.
Possibly because it is useful to have a tuple as a separate type, and it is good to keep different types separate?
In Haskell at least, it is quite easy to go from one form to the other:
Prelude> let add1 a b = a+b
Prelude> let add2 (a,b) = a+b
Prelude> :t (uncurry add1)
(uncurry add1) :: (Num a) => (a, a) -> a
Prelude> :t (curry add2)
(curry add2) :: (Num a) => a -> a -> a
so uncurry add1 is same as add2 and curry add2 is the same as add1. I guess similar things are possible in the other languages as well. What greater gains would there be to automatically currying every function defined on a tuple? (Since that's what you seem to be asking.)
Expanding on the comments under namin's good answer:
So assume a function which has type 'a -> ('a * 'a):
let gimme_tuple(a : int) =
    (a*2, a*3)
Then assume a function which has type ('a * 'a) -> 'b:
let add(a : int, b) =
    a + b
Then composition (assuming the conflation that I propose) wouldn't pose any problem as far as I can see:
let foo = add(gimme_tuple(5))
// foo gets value 5*2 + 5*3 = 25
But then you could conceive of a polymorphic function that takes the place of add in the last code snippet, for instance a little function that just takes a 2-tuple and makes a list of the two elements:
let gimme_list(a, b) =
    [a, b]
This would have the type ('a * 'a) -> ('a list). Composition now would be problematic. Consider:
let bar = gimme_list(gimme_tuple(5))
Would bar now have the value [10, 15] : int list or would bar be a function of type (int * int) -> ((int * int) list), which would eventually return a list whose head would be the tuple (10, 15)? For this to work, I posited in a comment to namin's answer that one would need an additional rule in the type system that the binding of actual to formal parameters be the "fullest possible", i.e. that the system should attempt a non-partial binding first and only try a partial binding if a full binding is unattainable. In our example, it would mean that we get the value [10, 15] because a full binding is possible in this case.
Is such a concept of "fullest possible" inherently meaningless? I don't know, but I can't immediately see a reason it would be.
One problem is of course if you want the second interpretation of the last code snippet, then you'd need to go jump through an extra hoop (typically an anonymous function) to get that.
Tuples are not present in these languages simply to be used as function parameters. They are a really convenient way to represent structured data, e.g., a 2D point (int * int), a list element ('a * 'a list), or a tree node ('a * 'a tree * 'a tree). Remember also that structures are just syntactic sugar for tuples. I.e.,
type info =
  { name : string;
    age : int;
    address : string; }
is a convenient way of handling a (string * int * string) tuple.
There's no fundamental reason you need tuples in a programming language (you can build up tuples in the lambda calculus, just as you can booleans and integers—indeed using currying*), but they are convenient and useful.
* E.g.,
tuple a b = λf.f a b
fst x = x (λa.λb.a)
snd x = x (λa.λb.b)
curry f = λa.λb.f (λg.g a b)
uncurry f = λx.x f
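The same Church encoding can be written as running Haskell with a rank-2 pair type (a sketch of mine; the names pair, first, second, curryP, uncurryP are made up to avoid clashing with the Prelude):

{-# LANGUAGE RankNTypes #-}

type Pair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> Pair a b
pair a b = \f -> f a b

first :: Pair a b -> a
first x = x (\a _ -> a)

second :: Pair a b -> b
second x = x (\_ b -> b)

curryP :: (Pair a b -> c) -> a -> b -> c
curryP f a b = f (pair a b)

uncurryP :: (a -> b -> c) -> Pair a b -> c
uncurryP f x = x f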

Resources