How to wrap unsafe FFI? (Haskell)

This is a follow-up question to Is there ever a good reason to use unsafePerformIO?
So we know that
double p_sin(double *p) { return sin(*p); }
is unsafe, and cannot be used with unsafePerformIO.
But the p_sin function is still a mathematical function, the fact that it was implemented in an unsafe way is an implementation detail. We don't exactly want, say, matrix multiplication to be in IO just because it involves allocating temporary memory.
How can we wrap this function in a safe way? Do we need to lock, allocate memory ourselves, etc? Is there a guide/tutorial for dealing with this?

Actually, the way p_sin is unsafe in that answer depends on it not being a mathematical function, at least not one from numbers to numbers -- it gives different answers when the memory the same pointer points to holds different values. So, mathematically speaking, there is something different between the two calls; with a formal model of pointers we can make that precise. E.g.
type Ptr = Int
type Heap = [Double]
p_sin :: Heap -> Ptr -> Double
and then the C function would be equivalent to
p_sin h p = sin (h !! p)
The reason the results would differ is because of a different Heap argument, which is unnamed but implicit in the C definition.
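For instance, with the toy model above, two calls that pass the same pointer but a different heap legitimately disagree (a small sketch using those definitions):
-- Same pointer, different heaps, different results: the extra Heap
-- argument is what the C version hides.
example1, example2 :: Double
example1 = p_sin [0.0, pi/2] 1   -- sin (pi/2) = 1.0
example2 = p_sin [0.0, 0.0]  1   -- sin 0.0    = 0.0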
If p_sin used temporary memory internally, but did not depend on the state of memory through its interface, e.g.
double p_sin(double x) {
double* y = (double*)malloc(sizeof(double));
*y = sin(x);
x = *y;
free(y);
return x;
}
then we do have an actual mathematical function Double -> Double, and we can
foreign import ccall safe "p_sin"
    p_sin :: Double -> Double
and we'd be fine. Pointers in the interface are killing the purity here, not C functions.
More practically, let's say you have a C matrix multiplication function implemented with pointers, since that's how you model arrays in C. In this case you'd probably expand the abstraction boundary, so there would be a few unsafe things going on in your program, but they would all be hidden from the module user. In this case, I recommend annotating everything unsafe with IO in your implementation, and then unsafePerformIOing right before you give it to the module user. This minimizes the surface area of impurity.
module Matrix
    -- only export things guaranteed to interact together purely
    ( Matrix, makeMatrix, multMatrix )
  where

newtype Matrix = Matrix (Ptr Double)

makeMatrix :: [[Double]] -> Matrix
makeMatrix = unsafePerformIO $ ...

foreign import ccall safe "multMatrix"
    multMatrix_ :: Ptr Double -> IO (Ptr Double)

multMatrix :: Matrix -> Matrix
multMatrix (Matrix p) = unsafePerformIO $ multMatrix_ p

etc.

Related

Does Haskell re-evaluate a value when recalling a function depending on this value?

Assume I have
f :: Int -> MyType
f i = ......
voxel :: MyType
voxel = f 2
function1 :: Double -> MyType -> MyOtherType
function1 x w = ......
function2 :: Double -> MyOtherType
function2 y = function1 y voxel
Now, assume I call function2 twice, for example function2 1.0 then function2 2.0 (I am in the context of OpenGL and MyOtherType is a graphical element to be plotted, and the user can change y with the keyboard).
Then, at the second call of function2, does Haskell re-evaluate voxel?
If I had such a situation in, say, C, I would put a printf in voxel to find out, but I can't do such a thing with a pure Haskell function (can I?).
Haskell doesn't say what happens.
GHC does not re-evaluate voxel; generally, let- and where-bound values are evaluated at most once. (Though keep in mind that class-polymorphic values behave like functions, where choosing an instance is function application, and these function calls are not memoized, so apparently benign class-polymorphic bindings will probably lead to many reevaluations.)
You can put a "printf" with Debug.Trace.trace, but this should be used for educational and debugging purposes only.
import Debug.Trace
voxel = trace "evaluated voxel" $ f 2
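The same trick also illustrates the class-polymorphism caveat above; a sketch (behaviour as observed at -O0; optimisation may restore sharing):
import Debug.Trace (trace)

-- Monomorphic binding: a plain thunk, evaluated at most once.
voxelMono :: Double
voxelMono = trace "evaluating voxelMono" (sin 2)

-- Class-polymorphic binding: compiled to a function taking a Floating
-- dictionary, so every use applies that function anew.
voxelPoly :: Floating a => a
voxelPoly = trace "evaluating voxelPoly" (sin 2)

main :: IO ()
main = do
  print (voxelMono + voxelMono)            -- trace printed once
  print (voxelPoly + voxelPoly :: Double)  -- trace typically printed twice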
It might, it might not, it depends on usage. Once it's evaluated, it'll stay evaluated until it gets garbage collected. As long as you have a reference to it somewhere, it can't be garbage collected.
Why would GHC allow a top level "constant" to be garbage collected? For a contrived example, imagine I have something like
nats :: [Integer]
nats = [0..]
and then another function which indexed into nats. If GHC couldn't collect nats, it'd have to keep the list [0..n] alive (where n was the largest index used) even though I'm not using most of it.

Do newtypes incur no cost even when you cannot pattern-match on them?

Context
Most Haskell tutorials I know (e.g. LYAH) introduce newtypes as a cost-free idiom that allows enforcing more type safety. For instance, this code will type-check:
type Speed = Double
type Length = Double
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
but this won't:
newtype Speed = Speed { getSpeed :: Double }
newtype Length = Length { getLength :: Double }
-- wrong!
computeTime :: Speed -> Length -> Double
computeTime v l = l / v
and this will:
-- right
computeTime :: Speed -> Length -> Double
computeTime (Speed v) (Length l) = l / v
In this particular example, the compiler knows that Speed is just a Double, so the pattern-matching is moot and will not generate any executable code.
Question
Are newtypes still cost-free when they appear as arguments of parametric types? For instance, consider a list of newtypes:
computeTimes :: [Speed] -> Length -> [Double]
computeTimes vs l = map (\v -> getSpeed v / l) vs
I could also pattern-match on speed in the lambda:
computeTimes' :: [Speed] -> Length -> [Double]
computeTimes' vs l = map (\(Speed v) -> v / l) vs
In either case, for some reason, I feel that real work is getting done! I start to feel even more uncomfortable when the newtype is buried within a deep tree of nested parametric datatypes, e.g. Map Speed [Set Speed]; in this situation, it may be difficult or impossible to pattern-match on the newtype, and one would have to resort to accessors like getSpeed.
TL;DR
Will the use of a newtype never ever incur a cost, even when the newtype appears as a (possibly deeply-buried) argument of another parametric type?
On their own, newtypes are cost-free: applying their constructor or pattern matching on them has zero cost.
When used as a parameter of other types, e.g. [T], the representation of [T] is precisely the same as that of [T'] if T is a newtype for T'. So, there's no loss in performance.
However, there are two main caveats I can see.
newtypes and instances
First, newtype is frequently used to introduce new instances of type classes. Clearly, when these are user-defined, there's no guarantee that they have the same cost as the original instances. E.g., when using
newtype Op a = Op a deriving Eq   -- Eq is needed as Ord's superclass

instance Ord a => Ord (Op a) where
    compare (Op x) (Op y) = compare y x
comparing two Op Int will cost slightly more than comparing Int, since the arguments need to be swapped. (I am neglecting optimizations here, which might make this cost free when they trigger.)
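For instance, a small usage sketch of this Op wrapper (the standard library's Data.Ord.Down plays the same role):
import Data.List (sort)

-- Sorting through the Op wrapper yields descending order, at the small
-- extra cost of the swapped comparison on every element.
descending :: [Int] -> [Int]
descending = map (\(Op x) -> x) . sort . map Op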
newtypes used as type arguments
The second point is more subtle. Consider the following two implementations of the identity [Int] -> [Int]
id1, id2 :: [Int] -> [Int]
id1 xs = xs
id2 xs = map (\x->x) xs
The first one has constant cost. The second has a linear cost (assuming no optimization triggers). A smart programmer should prefer the first implementation, which is also simpler to write.
Suppose now we introduce newtypes on the argument type, only:
id1, id2 :: [Op Int] -> [Int]
id1 xs = xs -- error!
id2 xs = map (\(Op x)->x) xs
We can no longer use the constant cost implementation because of a type error. The linear cost implementation still works, and is the only option.
Now, this is quite bad. The input representation for [Op Int] is exactly, bit by bit, the same as for [Int]. Yet, the type system forbids us from performing the identity conversion in an efficient way!
To overcome this issue, safe coercions were introduced in Haskell.
import Data.Coerce (coerce)

id3 :: [Op Int] -> [Int]
id3 = coerce
The magic coerce function, under certain hypotheses, removes or inserts newtypes as needed to make the types match, even inside other types, as for [Op Int] above. Further, it is a zero-cost function.
Note that coerce works only under certain conditions (the compiler checks for them). One of these is that the newtype constructor must be visible: if a module does not export Op :: a -> Op a, you cannot coerce Op Int to Int or vice versa. Indeed, if a module exports the type but not the constructor, it would be wrong to make the constructor accessible anyway through coerce. This keeps the "smart constructors" idiom safe: modules can still enforce complex invariants through opaque types.
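A minimal sketch of that last point, with hypothetical module and names:
-- Temperature.hs: exports the type and a smart constructor, but
-- deliberately not the Celsius data constructor.
module Temperature (Celsius, getCelsius, mkCelsius) where

newtype Celsius = Celsius { getCelsius :: Double }

mkCelsius :: Double -> Maybe Celsius
mkCelsius x
  | x >= -273.15 = Just (Celsius x)
  | otherwise    = Nothing

-- In a separate client module:
--
--   import Data.Coerce (coerce)
--   import Temperature
--
--   smuggle :: Double -> Celsius
--   smuggle = coerce    -- rejected: the Celsius constructor is not in
--                       -- scope here, so Coercible Double Celsius is
--                       -- not solvable and the invariant holds.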
It doesn't matter how deeply buried a newtype is in a stack of (fully) parametric types. At runtime, the values v :: Speed and w :: Double are completely indistinguishable – the wrapper is erased by the compiler, so even v is really just a pointer to a single 64-bit floating-point number in memory. Whether that pointer is stored in a list or tree or whatever doesn't make a difference either. getSpeed is a no-op and will not appear at runtime in any way at all.
So what do I mean by “fully parametric”? The thing is, newtypes can obviously make a difference at compile time, via the type system. In particular, they can guide instance resolution, so a newtype that invokes a different class method may certainly have worse (or, just as easily, better!) performance than the wrapped type. For example,
class Integral n => Fibonacci n where
  fib :: n -> Integer

instance Fibonacci Int where
  fib = (fibs !!)
    where fibs = [ if i < 2 then 1
                            else fib (i-2) + fib (i-1)
                 | i <- [0 :: Int ..] ]
this implementation is pretty slow, because it uses a lazy list (and performs lookups in it over and over again) for memoisation. On the other hand,
import qualified Data.Vector as Arr

-- | A number between 0 and 753
newtype SmallInt = SmallInt { getSmallInt :: Int }

instance Fibonacci SmallInt where
  fib = (fibs Arr.!) . getSmallInt
    where fibs = Arr.generate 754 $
            \i -> if i < 2 then 1
                           else fib (SmallInt $ i-2) + fib (SmallInt $ i-1)
This fib is much faster, because thanks to the input being limited to a small range, it is feasible to strictly allocate all of the results and store them in a fast O (1) lookup array, not needing the spine-laziness.
This of course applies again regardless of what structure you store the numbers in. But the different performance only comes about because different method instantiations are called – at runtime this means simply, completely different functions.
Now, a fully parametric type constructor must be able to store values of any type. In particular, it cannot impose any class restrictions on the contained data, and hence also cannot call any class methods. Therefore this kind of performance difference cannot happen if you're just dealing with generic [a] lists or Map Int a maps. It can, however, occur when you're dealing with GADTs. In this case, even the actual memory layout might be completely different, for instance with
{-# LANGUAGE GADTs #-}

import qualified Data.Vector as Arr
import qualified Data.Vector.Unboxed as UArr

data Array a where
  BoxedArray :: Arr.Vector a -> Array a
  UnboxArray :: UArr.Unbox a => UArr.Vector a -> Array a
might allow you to store Double values more efficiently than Speed values, because the former can be stored in a cache-optimised unboxed array. This is only possible because the UnboxArray constructor is not fully parametric.
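For example, a sketch reusing the Speed newtype from the question (assuming no Unbox instance has been derived for it):
-- Double has an Unbox instance, so it fits the compact representation;
-- the bare Speed newtype does not, so it must fall back to the boxed one.
doubleArray :: [Double] -> Array Double
doubleArray = UnboxArray . UArr.fromList

speedArray :: [Speed] -> Array Speed
speedArray = BoxedArray . Arr.fromList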

Parallel calls to HMatrix (or FFI in general)

I am working with point clouds in Haskell using the repa library (versions 3 as well as 4). At least I am trying to.
There are a few operations I need to do massively, where parallelism really helps a lot. Most of these are simple linear algebra operations on the (metric) neighborhood of a point, for example a principal component analysis where I need to compute an SVD of a small matrix whose rows are points.
Now I use the linear package for the vector type
type Vec3 = V3 Float
and a 1-dimensional array of these vectors for point clouds
type Cloud = Array F DIM1 Vec3
So now I have the problem of calling hmatrix's matrix-decomposition functions in parallel using computeP. I tried hmatrix itself as well as the repa-linear-algebra package for that. The problem I have is that with all these calls (no matter how I provide the data, and no matter what I call: svd, eigendecompositions, QR decomposition, etc.) the application always crashes randomly with a bus error or segfault.
I also didn't find any way to get any stacktrace that would at least point me in the right direction. Stack traces usually end at pthread.
Additionally, I wrote my own C code, which I call like e.g.:
foreign import ccall safe "pca.hpp pca"
    c_pca :: CUInt -> Ptr Float -> Ptr Float -> Ptr Float -> IO ()

{-# INLINE foreignPCA #-}
foreignPCA :: forall r . (Source r Vec3) => Array r DIM1 Vec3 -> ([Vec3], Vec3)
foreignPCA !vs = unsafePerformIO $ do
    n  <- return $ Arrays.length vs
    ps <- mallocForeignPtrArray n :: IO (ForeignPtr Vec3)   -- point matrix
    computeIntoS ps (delay vs)
    as <- mallocForeignPtrArray 3 :: IO (ForeignPtr Float)  -- singular values
    av <- mallocForeignPtrArray 3 :: IO (ForeignPtr Vec3)   -- right singular vectors
    withForeignPtr (castForeignPtr ps) $ \pps ->
      withForeignPtr as $ \pas ->
        withForeignPtr (castForeignPtr av) $ \pav -> do
          c_pca (fromIntegral n) pps pas pav
          svalues <- peekArray 3 pas :: IO [Float]
          svecs   <- peekArray 3 (castPtr pav :: Ptr Vec3) :: IO [Vec3]
          let [sx, sy, sz] = svalues in
            return (svecs, V3 sx sy sz)
This works perfectly fine on a massive point cloud with 20 cores in parallel. Never crashed in any way.
Now my very vague idea is that hmatrix calls C/Fortran code with "safe", thereby allowing pthread forks, without the underlying code actually being thread-safe.
I can't verify this assumption, as debugging seems a foreign concept to the Haskell tool chain (at least to a complete newbie like me).
In conclusion, I have four questions:
Is hmatrix known to have problems working in parallel?
Is there anyone working on native implementations of these fundamental algorithms?
How do I prevent FFI-wrapped code from spawning fork()'ed threads without having access to the import call?
How do I debug things like hmatrix?
The second one is of particular interest to me, since I find hmatrix to be incredibly ugly (subhask looks very promising but is too incomplete to be feasible). My goal is to switch to Haskell, but if I have to use my own C++ code for any trivial matter like the above, I can just keep coding in C++ as I do now...

What are hashes (#) used for in the library's source?

I was trying to figure out how MVars work, and I came across this bit of code:
-- |Create an 'MVar' which is initially empty.
newEmptyMVar :: IO (MVar a)
newEmptyMVar = IO $ \ s# ->
    case newMVar# s# of
        (# s2#, svar# #) -> (# s2#, MVar svar# #)
Besides being confusingly mutually recursive with newMVar, it's also littered with hashes (#).
Between the two, I can't figure out how it works. I know that this is basically just a pseudo-constructor for MVar, but the rest of the module (most of the library, actually) contains them, and I can't find anything on them. Googling "Haskell hashes" didn't yield anything relevant.
They're (literally) magic hashes. They distinguish GHC's primitives, like primitive addition, unboxed types, and unboxed tuples. You can enable writing them with
{-# LANGUAGE MagicHash #-}
Now you can import the stubs that let you use them with
import GHC.Exts
unboxed :: Int# -> Int# -> Int#
unboxed a# b# = a# +# b#
boxed :: Int -> Int -> Int
boxed (I# a#) (I# b#) = I# (unboxed a# b#)
This is actually kind of nifty when you think about it: by wrapping the magical, strict primitives like this, we can handle lazy Ints and Chars uniformly at the runtime-system level.
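Indeed, this is roughly how GHC itself defines the familiar boxed types (simplified from GHC.Types in ghc-prim):
-- The boxed types are ordinary constructors wrapped around an unboxed primitive.
data Int  = I# Int#
data Char = C# Char#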
Because primitives are not boxed, they're segregated at the kind level. This means that Int# doesn't have the kind * like normal types, which also means something like
kindClash :: Int# -> Int#
kindClash = id -- id expects boxed types
Won't compile.
To further elaborate on your code: newMVar# is a compiler primitive in GHC that allocates a new mutable variable, so this isn't mutual recursion so much as a thin wrapper over a primop call. There's also some darkness gathering at the corners of this function, since we're treating IO as a perverse state monad, but let's not look too closely at that. I like my sanity too much.
I don't use primitives in everyday code, nor should you. They come up when implementing crazy optimized hotspots, or near primitive abstractions like what you're looking at.

Short-lived memoization in Haskell?

In an object-oriented language when I need to cache/memoize the results of a function for a known life-time I'll generally follow this pattern:
Create a new class
Add to the class a data member and a method for each function result I want to cache
Implement the method to first check to see if the result has been stored in the data member. If so, return that value; else call the function (with the appropriate arguments) and store the returned result in the data member.
Objects of this class will be initialized with values that are needed for the various function calls.
This object-based approach is very similar to the function-based memoization pattern described here: http://www.bardiak.com/2012/01/javascript-memoization-pattern.html
The main benefit of this approach is that the results are kept around only for the life time of the cache object. A common use case is in the processing of a list of work items. For each work item one creates the cache object for that item, processes the work item with that cache object then discards the work item and cache object before proceeding to the next work item.
What are good ways to implement short-lived memoization in Haskell? And does the answer depend on if the functions to be cached are pure or involve IO?
Just to reiterate - it would be nice to see solutions for functions which involve IO.
Let's use Luke Palmer's memoization library: Data.MemoCombinators
import qualified Data.MemoCombinators as Memo
import Data.Function (fix) -- we'll need this too
I'm going to define things slightly differently from how his library does, but it's basically the same (and furthermore, compatible). A "memoizable" thing takes itself as input, and produces the "real" thing.
type Memoizable a = a -> a
A "memoizer" takes a function and produces the memoized version of it.
type Memoizer a b = (a -> b) -> a -> b
Let's write a little function to put these two things together. Given a Memoizable function and a Memoizer, we want the resultant memoized function.
runMemo :: Memoizer a b -> Memoizable (a -> b) -> a -> b
runMemo memo f = fix (f . memo)
This is a little magic using the fixpoint combinator (fix). Never mind that; you can google it if you are interested.
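For reference, fix has a one-line definition (the standard one from Data.Function), and unfolding runMemo shows how the knot gets tied:
fix :: (a -> a) -> a
fix f = let x = f x in x

-- Unfolding: runMemo memo f = let x = f (memo x) in x, so every
-- recursive call f makes through `self` goes via the memoized table.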
So let's write a Memoizable version of the classic fib example:
fib :: Memoizable (Integer -> Integer)
fib self = go
  where go 0 = 1
        go 1 = 1
        go n = self (n-1) + self (n-2)
Using a self convention makes the code straightforward. Remember, self is what we expect to be the memoized version of this very function, so recursive calls should be on self. Now fire up ghci.
ghci> let fib' = runMemo Memo.integral fib
ghci> fib' 10000
WALL OF NUMBERS CRANKED OUT RIDICULOUSLY FAST
Now, the cool thing about runMemo is you can create more than one freshly memoized version of the same function, and they will not share memory banks. That means that I can write a function that locally creates and uses fib', but then as soon as fib' falls out of scope (or earlier, depending on the intelligence of the compiler), it can be garbage collected. It doesn't have to be memoized at the top level (see the sketch at the end of this answer). This may or may not play nicely with memoization techniques that rely on unsafePerformIO. Data.MemoCombinators uses a pure, lazy Trie, which fits perfectly with runMemo.
Rather than creating an object which essentially becomes a memoization manager, you can simply create memoized functions on demand. The catch is that if your function is recursive, it must be written as Memoizable. The good news is you can plug in any Memoizer that you wish. You could even use:
noMemo :: Memoizer a b
noMemo f = f
ghci> let fib' = runMemo noMemo fib
ghci> fib' 30 -- wait a while; it's computing stupidly
1346269
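Coming back to the locally memoized point above, here is a sketch of such a use; the memo table is reachable only through fib' and can be collected once sumFibs returns:
sumFibs :: [Integer] -> Integer
sumFibs ns = sum (map fib' ns)
  where fib' = runMemo Memo.integral fib   -- a fresh memo table (per call, modulo let-floating)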
Lazy Haskell programming is, in a way, the memoization paradigm taken to an extreme. Also, whatever you do in an imperative language is possible in Haskell, using the IO monad, the ST monad, monad transformers, arrows, or what have you.
The only problem is that these abstraction devices are much more complicated than the imperative equivalent that you mentioned, and they need a pretty deep mind-rewiring.
I believe the above answers are both more complex than necessary, although they might be more portable than what I'm about to describe.
As I understand it, there is a rule in GHC that each value is computed at most once per entry of its enclosing lambda expression. You may thus create exactly your short-lived memoization object as follows.
import qualified Data.Vector as V

indexerVector :: (t -> Int) -> V.Vector t -> Int -> [t]
indexerVector idx vec = \e -> tbl V.! e
  where m   = maximum $ map idx $ V.toList vec
        tbl = V.accumulate (flip (:)) (V.replicate (m+1) [])  -- indices run 0..m
                           (V.map (\v -> (idx v, v)) vec)
What does this do? It groups all the elements of the Data.Vector t passed as its second argument vec according to the integer computed by its first argument idx, retaining the grouping as a vector of lists. It returns a function of type Int -> [t] which looks up a group by this pre-computed index value.
Our compiler, GHC, has promised that tbl shall only be thunked once per invocation of indexerVector. We may therefore assign the lambda expression \e -> tbl V.! e returned by indexerVector to another value, which we may use repeatedly without fear that tbl ever gets recomputed. You may verify this by inserting a trace on tbl.
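A usage sketch with made-up data; the returned closure lookupByLen is exactly the short-lived cache object:
example :: ([String], [String])
example =
  let vec         = V.fromList ["a", "bb", "cc", "ddd", "eee"]
      lookupByLen = indexerVector length vec   -- tbl built lazily, once
  in (lookupByLen 2, lookupByLen 3)            -- (["cc","bb"], ["eee","ddd"])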
In short, your caching object is exactly this lambda expression.
I've found that almost anything you can accomplish with a short term object can be better accomplished by returning a lambda expression like this.
You can use the very same pattern in Haskell too. Lazy evaluation takes care of checking whether a value has already been evaluated. It has been mentioned multiple times already, but a code example could be useful. In the example below, memoedValue is calculated only once, when it is first demanded.
data Memoed = Memoed
  { value       :: Int
  , memoedValue :: Int
  }

memo :: Int -> Memoed
memo i = Memoed
  { value       = i
  , memoedValue = expensiveComputation i
  }
Even better, you can memoize values which depend on other memoized values. You should avoid dependency loops, though; they can lead to non-termination.
data Memoed = Memoed
  { value        :: Int
  , memoedValue1 :: Int
  , memoedValue2 :: Int
  }

memo :: Int -> Memoed
memo i = r
  where
    r = Memoed
      { value        = i
      , memoedValue1 = expensiveComputation i
      , memoedValue2 = anotherComputation (memoedValue1 r)
      }
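A usage sketch, with expensiveComputation and anotherComputation standing in for real work:
-- The record r acts as the short-lived cache: each memoed field is a thunk
-- that is forced at most once, and everything becomes garbage once r does.
processItem :: Int -> Int
processItem i = memoedValue1 r + memoedValue1 r + memoedValue2 r
  where r = memo i   -- expensiveComputation i runs only once here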
