I'm trying to debug an issue with an implementation of a threshold encryption scheme. I've posted a question on crypto to get some help with the actual scheme, but I was hoping to get a sanity check here on the simplified code I'm using.
Essentially, the cryptosystem uses Shamir's Secret Sharing to combine the shares of a key. The polynomial is each member of the list 'a' multiplied by an increasing power of the parameter of the polynomial. I've left out the mod by a prime to simplify the code, as the actual implementation uses PBC via a Haskell wrapper.
For the polynomial I have:
poly :: [Integer] -> Integer -> Integer
poly as xi = f 1 as
  where
    f _ []      = 0
    f 0 _       = 0
    f s (a:as)  = (a * s) + f (s * xi) as
The Lagrange interpolation is:
interp0 :: [(Integer, Integer)] -> Integer
interp0 xys = round (sum $ zipWith (*) ys $ fmap (f xs) xs)
  where
    xs = map (fromIntegral . fst) xys
    ys = map (fromIntegral . snd) xys

    f :: (Eq a, Fractional a) => [a] -> a -> a
    f xs xj = product $ map (p xj) xs

    p :: (Eq a, Fractional a) => a -> a -> a
    p xj xm = if xj == xm then 1 else negate (xm / (xj - xm))
and the split-and-combine code is:

execPoly as@(a0:_) = do
  let xs = zipWith (,) [0..] (fmap (poly as) [0..100])
  let t = length as + 1
  let offset = 1
  let shares = take t (drop offset xs)
  let sm2 = interp0 shares
  putText ("poly and interp over " <> show as <> " = " <> show sm2 <> ". Should be " <> show a0)

main :: IO ()
main = do
  execPoly [10,20,30,40,50,60,70,80,90,100,110,120,130,140,150] -- 1
  execPoly [10,20,30,40,50,60,70,80]                            -- 2
The first execPoly call (1) fails to combine to 10, but the second (2) combines correctly. The magic threshold seems to be 8.
Is my code correct? Am I missing something in the implementation that limits the threshold size to 8?
As MathematicalOrchid said, it was a precision problem.
I updated the code to:

f :: (Eq a, Integral a) => [a] -> a -> Ratio a
f xs xj = product $ map (p xj) xs

p :: (Eq a, Integral a) => a -> a -> Ratio a
p xj xm = if xj == xm then 1 % 1 else negate xm % (xj - xm)

and it works as expected.
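For completeness, here is a minimal sketch (just an illustration, not the real implementation, which still works mod the prime via PBC) of how the exact-rational helpers slot back into interp0:

import Data.Ratio (Ratio, (%), numerator)

-- Lagrange interpolation at x = 0, done entirely over the rationals.
-- For consistent shares the sum reduces to a whole number, so taking the
-- numerator recovers the Integer secret without any rounding.
interp0 :: [(Integer, Integer)] -> Integer
interp0 xys = numerator (sum $ zipWith (*) ys $ map (f xs) xs)
  where
    xs = map fst xys
    ys = map ((% 1) . snd) xys

    f :: [Integer] -> Integer -> Ratio Integer
    f xs' xj = product $ map (p xj) xs'

    p :: Integer -> Integer -> Ratio Integer
    p xj xm = if xj == xm then 1 % 1 else negate xm % (xj - xm)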
So I've only a little experience with Haskell, and I've been working on the program below to implement a search for the maxima of a function, yet I keep getting an odd error. When I compile it, it says:

MaximaSearch.hs:26:1:
    parse error (possibly incorrect indentation or mismatched brackets)

That's the line that says "main = do", so I think it's some sort of trailing error from my indentation in the preceding code, but I can't spot any mistakes...
Here's the code:
module Main where

g :: Float -> Float
--your function here
g x = cos(x^2)

--goldenSectionSearch
goldenSS :: (Float -> Float) -> Float -> Float -> Float -> Float -> Float
goldenSS f a b c tau
    | (c-a) < tau * (abs b + abs x) = (c+a)/2
    | f x > f b = let
        t1 | (c - b) > (b-a) = goldenSS f b x c tau
           | otherwise       = goldenSS f a x b tau
        in t1
    | otherwise = let
        t2 | (c-b) > (b-a) = goldenSS f a b x tau
           | otherwise     = goldenSS f x b c tau
        in t2
    where
      let x
            | (c-b) > (b-a) = b + resphi*(c-b)
            | otherwise     = b - resphi*(b-a)
            where resphi = 2 - phi where phi = (1+ sqrt 5)/2
      in x

--main
main = do
    print x
    print (g x)
    where
      x = goldenSS g a ((a+b)/2) b tau
        where
          a = 2
          b = 3
          tau = 0.001
any ideas?
The parse error you're getting stems from the non-idiomatic use of let and where bindings in your code.
Haskell allows multiple syntactic structures for temporary bindings and pattern matching, but you're combining them in a rather strange and messy manner.
To learn how to write the code more cleanly and in a manner more usual for Haskell, I'd recommend looking at existing Haskell libraries and programs (for example on Hackage) to get a feel for how let and where bindings usually work. In general I find that for pure functions I almost exclusively use where (as opposed to let), but some of this is stylistic.
As for this code, I modified it a bit to use where bindings instead of let, and it compiles and runs for me now. Even if you have to tweak it a bit to get it to compile for you, this overall structure is cleaner and less likely to give you parse errors:
module Main where

g :: Float -> Float
g x = cos(x^2)

goldenSS :: (Float -> Float) -> Float -> Float -> Float -> Float -> Float
goldenSS f a b c tau
  | (c-a) < tau * (abs b + abs x) = (c+a)/2
  | f x > f b = t1
  | otherwise = t2
  where
    x | (c-b) > (b-a) = b + resphi*(c-b)
      | otherwise     = b - resphi*(b-a)
    resphi = 2 - phi
    phi = (1 + sqrt 5)/2
    t1 | (c-b) > (b-a) = goldenSS f b x c tau
       | otherwise     = goldenSS f a x b tau
    t2 | (c-b) > (b-a) = goldenSS f a b x tau
       | otherwise     = goldenSS f x b c tau

main =
  do
    print x
    print (g x)
  where
    x   = goldenSS g a ((a+b)/2) b tau
    a   = 2
    b   = 3
    tau = 0.001
Consider the pure function below, written in an imperative language:
def foo(x,y):
    x = f(x) if a(x)
    if c(x):
        x = g(x)
    else:
        x = h(x)
    x = f(x)
    y = f(y) if a(y)
    x = g(x) if b(y)
    return [x,y]
That function represents a style where you have to incrementally update variables. It can be avoided in most cases, but there are situations where that pattern is unavoidable - for example, writing a cooking procedure for a robot, which inherently requires a series of steps and decisions. Now, imagine we were trying to represent foo in Haskell.
foo x0 y0 =
  let x1 = if a x0 then f x0 else x0 in
  let x2 = if c x1 then g x1 else h x1 in
  let x3 = f x2 in
  let y1 = if a y0 then f y0 else y0 in
  let x4 = if b y1 then g x3 else x3 in
  [x4,y1]
That code works, but it is too complicated and error prone due to the need for manually managing the numeric tags. Notice that, after x1 is set, x0's value should never be used again, but it still can. If you accidentally use it, that will be an undetected error.
I've managed to solve this problem using the State monad:
fooSt x y = execState (do
  (x, y) <- get
  when (a x) (put (f x, y))
  (x, y) <- get
  if c x
    then put (g x, y)
    else put (h x, y)
  (x, y) <- get
  put (f x, y)
  (x, y) <- get
  when (a y) (put (x, f y))
  (x, y) <- get
  when (b y) (put (g x, y))) (x, y)
This way, need for tag-tracking goes away, as well as the risk of accidentally using an outdated variable. But now the code is verbose and much harder to understand, mainly due to the repetition of (x,y) <- get.
So: what is a more readable, elegant and safe way to express this pattern?
Full code for testing.
Your goals
While the direct transformation of imperative code would usually lead to the ST monad and STRef, let's think about what you actually want to do:
You want to manipulate values conditionally.
You want to return that value.
You want to sequence the steps of your manipulation.
Requirements
At first, this indeed looks like a case for the ST monad. However, if we follow the simple monad laws, together with do notation, we see that
do
  x <- return $ if somePredicate x then g x
                                   else h x
  x <- return $ if someOtherPredicate x then a x
                                        else b x
is exactly what you want. Since you need only the most basic functions of a monad (return and >>=), you can use the simplest:
The Identity monad
foo x y = runIdentity $ do
  x <- return $ if a x then f x
                       else x
  x <- return $ if c x then g x
                       else h x
  x <- return $ f x
  y <- return $ if a y then f y
                       else y
  x <- return $ if b y then g x
                       else x
  return (x, y)
Note that you cannot use let x = if a x then f x else x, because in this case the x would be the same on both sides, whereas
x <- return $ if a x then f x
                     else x
is the same as
(return $ if a x then (f x) else x) >>= \x -> ...
and the x in the if expression is clearly not the same as the resulting one, which is going to be used in the lambda on the right hand side.
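To make that difference concrete, here is a tiny standalone illustration (the names diverges and terminates are made up for this example):

import Data.Functor.Identity (runIdentity)

-- Recursive let: the x on the right-hand side is the very x being defined,
-- so forcing it never finishes (GHC typically reports <<loop>>).
diverges :: Int
diverges = let x = if even x then x + 1 else x in x

-- Monadic rebinding: the x tested by even is the function argument, and the
-- lambda introduces a fresh x that merely shadows it.
terminates :: Int -> Int
terminates x = runIdentity $
  return (if even x then x + 1 else x) >>= \x -> return x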
Helpers
In order to make this more clear, you can add helpers like
condM :: Monad m => Bool -> a -> a -> m a
condM p a b = return $ if p then a else b
to get an even more concise version:
foo x y = runIdentity $ do
x <- condM (a x) (f x) x
x <- fmap f $ condM (c x) (g x) (h x)
y <- condM (a y) (f y) y
x <- condM (b y) (g x) x
return (x , y)
Ternary craziness
And while we're at it, let's crank up the craziness and introduce a ternary operator:
(?) :: Bool -> (a, a) -> a
b ? ie = if b then fst ie else snd ie
(??) :: Monad m => Bool -> (a, a) -> m a
(??) p = return . (?) p
(#) :: a -> a -> (a, a)
(#) = (,)
infixr 2 ??
infixr 2 #
infixr 2 ?
foo x y = runIdentity $ do
  x <- a x ?? f x # x
  x <- fmap f $ c x ?? g x # h x
  y <- a y ?? f y # y
  x <- b y ?? g x # x
  return (x, y)
But the bottom line is that the Identity monad has everything you need for this task.
Imperative or non-imperative
One might argue about whether this style is imperative. It's definitely a sequence of actions. But there's no state, unless you count the bound variables. Then again, a chain of let … in … declarations also implies a sequence: you expect the first let to bind first.
Using Identity is purely functional
Either way, the code above doesn't introduce mutability. x doesn't get modified; instead you get a new x or y shadowing the previous one. This becomes clear if you desugar the do expression as noted above:
foo x y = runIdentity $
  a x ?? f x # x   >>= \x ->
  c x ?? g x # h x >>= \x ->
  return (f x)     >>= \x ->
  a y ?? f y # y   >>= \y ->
  b y ?? g x # x   >>= \x ->
  return (x, y)
Getting rid of the simplest monad
However, if we use (?) instead of (??) and remove the returns, we can replace (>>=) :: m a -> (a -> m b) -> m b with something of type a -> (a -> b) -> b. This just happens to be flip ($). We end up with:
($>) :: a -> (a -> b) -> b
($>) = flip ($)

infixr 0 $>  -- same fixity as ($)

foo x y = a x ? f x # x   $> \x ->
          c x ? g x # h x $> \x ->
          f x             $> \x ->
          a y ? f y # y   $> \y ->
          b y ? g x # x   $> \x ->
          (x, y)
This is very similar to the desugared do expression above. Note that any usage of Identity can be transformed into this style, and vice-versa.
The problem you state looks like a nice application for arrows:
import Control.Arrow

if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' p f g x = if p x then f x else g x

foo2 :: (Int, Int) -> (Int, Int)
foo2 = first (if' c g h . if' a f id) >>>
       first f >>>
       second (if' a f id) >>>
       (\(x, y) -> (if b y then g x else x, y))
In particular, first lifts a function a -> b to (a, c) -> (b, c), applying it to the first component of a pair while leaving the second untouched, which is more idiomatic than threading the untouched value through by hand.
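If first and second are unfamiliar, here is a small made-up example of what they do on their own:

import Control.Arrow (first, second)

-- first applies a function to the left component of a pair, second to the
-- right component; the other component is passed through untouched.
pairDemo :: ((Int, String), (Int, String))
pairDemo = ( first (+ 1) (10, "kept")       -- (11, "kept")
           , second (++ "!") (10, "hi") )   -- (10, "hi!")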
Edit: if' can itself be written as a lift:
import Control.Applicative (liftA3)
-- a functional if for lifting
if'' b x y = if b then x else y
if' :: (a -> Bool) -> (a -> a) -> (a -> a) -> a -> a
if' = liftA3 if''
I'd probably do something like this:
foo x y = (x', y')
  where x' = bg y' . f . cgh . af $ x
        y' = af y
        af z   = (if a z then f else id) z
        cgh z  = (if c z then g else h) z
        bg y x = (if b y then g else id) x
For something more complicated, you may want to consider using lens:
import Control.Lens
import Control.Monad (when)
import Control.Monad.State (execState)

whenM :: Monad m => m Bool -> m () -> m ()
whenM c a = c >>= \res -> when res a

ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb ml mr = mb >>= \b -> if b then ml else mr

foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
  whenM (uses _1 a) $
    _1 %= f
  ifM (uses _1 c)
    (_1 %= g)
    (_1 %= h)
  _1 %= f
  whenM (uses _2 a) $
    _2 %= f
  whenM (uses _2 b) $ do
    _1 %= g
And there's nothing stopping you from using more descriptive variable names:
foo :: Int -> Int -> (Int, Int)
foo = curry . execState $ do
  let x :: Lens (a, c) (b, c) a b
      x = _1
      y :: Lens (c, a) (c, b) a b
      y = _2
  whenM (uses x a) $
    x %= f
  ifM (uses x c)
    (x %= g)
    (x %= h)
  x %= f
  whenM (uses y a) $
    y %= f
  whenM (uses y b) $ do
    x %= g
This is a job for the ST (state transformer) library.
ST provides:
Stateful computations in the form of the ST type. These look like ST s a for a computation that results in a value of type a, and may be run with runST to obtain a pure a value.
First-class mutable references in the form of the STRef type. The newSTRef a action creates a new STRef s a reference with an initial value of a, and which can be read with readSTRef ref and written with writeSTRef ref a. A single ST computation can use any number of STRef references internally.
Together, these let you express the same mutable variable functionality as in your imperative example.
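As a tiny standalone warm-up (separate from the translation below), the raw API looks like this:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef, modifySTRef)

-- Allocate a reference, mutate it twice, and return the final value as an
-- ordinary pure result.
counterDemo :: Int
counterDemo = runST $ do
  ref <- newSTRef 0
  writeSTRef ref 10
  modifySTRef ref (+ 1)
  readSTRef ref          -- 11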
To use ST and STRef, we need to import:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Monad.ST.Safe
import Data.STRef
Instead of using the low-level readSTRef and writeSTRef all over the place, we can define the following helpers to match the imperative operations that the Python-style foo example uses:
-- STRef assignment.
(=:) :: STRef s a -> ST s a -> ST s ()
ref =: x = writeSTRef ref =<< x

-- STRef function application.
($:) :: (a -> b) -> STRef s a -> ST s b
f $: ref = f `fmap` readSTRef ref

-- Postfix guard syntax.
if_ :: Monad m => m () -> m Bool -> m ()
action `if_` guard = act' =<< guard
  where act' b = if b then action
                      else return ()
This lets us write:
ref =: x to assign the value of ST computation x to the STRef ref.
(f $: ref) to apply a pure function f to the STRef ref.
action `if_` guard to execute action only if guard results in True.
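For instance, a made-up one-liner using these helpers could read:

-- Increment a reference only if its current value is even.
bumpIfEven :: STRef s Int -> ST s ()
bumpIfEven r = (r =: ((+ 1) $: r)) `if_` (even $: r)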
With these helpers in place, we can faithfully translate the original imperative definition of foo into Haskell:
a = (< 10)
b = even
c = odd

f x = x + 3
g x = x * 2
h x = x - 1
f3 x = x + 2

-- A stateful computation that takes two integer STRefs and results in a final [x, y].
fooST :: Integral n => STRef s n -> STRef s n -> ST s [n]
fooST x y = do
  x =: (f $: x) `if_` (a $: x)
  x' <- readSTRef x
  if c x' then
    x =: (g $: x)
  else
    x =: (h $: x)
  x =: (f $: x)
  y =: (f $: y) `if_` (a $: y)
  x =: (g $: x) `if_` (b $: y)
  sequence [readSTRef x, readSTRef y]

-- Pure wrapper: simply call fooST with two fresh references, and run it.
foo :: Integral n => n -> n -> [n]
foo x y = runST $ do
  x' <- newSTRef x
  y' <- newSTRef y
  fooST x' y'

-- This will print "[9,3]".
main = print (foo 0 0)
Points to note:
Although we first had to define some syntactical helpers (=:, $:, if_) before translating foo, this demonstrates how you can use ST and STRef as a foundation to grow your own little imperative language that's directly suited to the problem at hand.
Syntax aside, this matches the structure of the original imperative definition exactly, without any error-prone restructuring. Any minor changes to the original example can be mirrored directly to Haskell. (The addition of the temporary x' <- readSTRef x binding in the Haskell code is only in order to use it with the native if/else syntax: if desired, this can be replaced with an appropriate ST-based if/else construct.)
The above code demonstrates giving both pure and stateful interfaces to the same computation: pure callers can use foo without knowing that it uses mutable state internally, while ST callers can directly use fooST (and for example provide it with existing STRefs to modify).
@Sibi said it best in his comment:
I would suggest that you stop thinking imperatively and rather think in a functional way. I agree that it will take some time to get used to the new pattern, but trying to translate imperative ideas to functional languages isn't a great approach.
Practically speaking, your chain of let can be a good starting point:
foo x0 y0 =
  let x1 = if a x0 then f x0 else x0 in
  let x2 = if c x1 then g x1 else h x1 in
  let x3 = f x2 in
  let y1 = if a y0 then f y0 else y0 in
  let x4 = if b y1 then g x3 else x3 in
  [x4,y1]
But I would suggest using a single let and giving descriptive names to the intermediate stages.
In this example unfortunately I don't have a clue what the various x's and y's do, so I cannot suggest meaningful names. In real code you would use names such as x_normalized, x_translated, or such, instead of x1 and x2, to describe what those values really are.
In fact, in a let or where you don't really have variables: they're just shorthand names you give to intermediate results, to make it easy to compose the final expression (the one after in or before the where.)
This is the spirit behind the x_bar and x_baz below. Try to come up with names that are reasonably descriptive, given the context of your code.
foo x y =
  let x_bar   = if a x then f x else x
      x_baz   = f (if c x_bar then g x_bar else h x_bar)
      y_bar   = if a y then f y else y
      x_there = if b y_bar then g x_baz else x_baz
  in [x_there, y_bar]
Then you can start recognizing patterns that were hidden in the imperative code. For example, x_bar and y_bar are basically the same transformation, applied respectively to x and y: that's why they share the suffix "_bar" in this nonsensical example. Your x2 probably doesn't need an intermediate name at all, since you can just apply f to the result of the entire "if c then g else h".
Going on with the pattern recognition, you should factor out the transformations that you are applying to variables into sub-lambdas (or whatever you call the auxiliary functions defined in a where clause.)
Again, I don't have a clue what the original code did, so I cannot suggest meaningful names for the auxiliary functions. In a real application, f_if_a would be called normalize_if_needed or thaw_if_frozen or mow_if_overgrown... you get the idea:
foo x y =
  let x_bar   = f_if_a x
      y_bar   = f_if_a y
      x_baz   = f (g_if_c_else_h x_bar)
      x_there = g_if_b x_baz y_bar
  in [x_there, y_bar]
  where
    f_if_a x
      | a x       = f x
      | otherwise = x
    g_if_c_else_h x
      | c x       = g x
      | otherwise = h x
    g_if_b x y
      | b y       = g x
      | otherwise = x
Don't disregard this naming business.
The whole point of Haskell and other pure functional languages is to express algorithms without the assignment operator, meaning the tool that can modify the value of an existing variable.
The names you give to things inside a function definition, whether introduced as arguments, let, or where, can only refer to one value (or auxiliary function) throughout the entire definition, so that your code can be more easily reasoned about and proven correct.
If you don't give them meaningful names (and, conversely, give your code a meaningful structure), then you're missing out on the entire purpose of Haskell.
(IMHO the other answers so far, citing monads and other shenanigans, are barking up the wrong tree.)
I always prefer layering state transformers to using a single state over a tuple: it definitely declutters things by letting you "focus" on a specific layer (representations of the x and y variables in our case):
import Control.Monad (when)
import Control.Monad.Trans.Class
import Control.Monad.Trans.State

foo :: Int -> Int -> (Int, Int)
foo x y =
  (flip runState) y $ (flip execStateT) x $ do
    get >>= \v -> when (a v) (put (f v))
    get >>= \v -> put ((if c v then g else h) v)
    modify f
    lift $ get >>= \v -> when (a v) (put (f v))
    lift get >>= \v -> when (b v) (modify g)
The lift function allows us to focus on the inner state layer, which is y.
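As a quick sanity check, assuming the example definitions from the ST answer above (a = (< 10), b = even, c = odd, f = (+ 3), g = (* 2), h = subtract 1) are in scope:

main :: IO ()
main = print (foo 0 0)   -- prints (9,3), in agreement with the other answers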
I have the following function to obtain the power of a matrix:

X^0 = identity matrix
X^1 = X
X^2 = X' X
X^3 = X X' X
X^4 = X' X X' X
...

I tried the following function:
import Numeric.Container
import Numeric.LinearAlgebra

mpow :: Field t => (Matrix t) -> Integer -> (Matrix t)
mpow x 0 = ident $ cols x
mpow x 1 = x
mpow x n =
  if (mod n 2) == 0
    then multiply (trans x) (mpow x $ n - 1)
    else multiply x (mpow x $ n - 1)
Is it possible to rewrite this function without using the if-else statement ?
Yes, you could use guards. But quite often it will compile into the same internal representation in Haskell.
import Numeric.Container
import Numeric.LinearAlgebra

mpow :: Field t => (Matrix t) -> Integer -> (Matrix t)
mpow x 0 = ident $ cols x
mpow x 1 = x
mpow x n | (mod n 2) == 0 = multiply (trans x) (mpow x $ n - 1)
         | otherwise      = multiply x (mpow x $ n - 1)
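For what it's worth, both spellings end up as a case expression after desugaring; conceptually something like this (a sketch with a made-up name, not literal GHC output, assuming the same imports as above):

mpowCase :: Field t => Matrix t -> Integer -> Matrix t
mpowCase x 0 = ident $ cols x
mpowCase x 1 = x
mpowCase x n = case mod n 2 == 0 of
  True  -> multiply (trans x) (mpowCase x $ n - 1)
  False -> multiply x (mpowCase x $ n - 1)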
As freyrs mentioned, guards and if expressions are essentially equivalent, as both are converted to case expressions when you compile your code. But you can still get rid of them:
mpow' :: Field t => (Matrix t) -> Integer -> (Matrix t)
mpow' x 0 = ident $ cols x
mpow' x 1 = x
mpow' x n = multiply (head (drop n' fs) $ x) (mpow' x $ n - 1)
  where fs = [trans, id]
        n' = fromInteger (mod n 2)
However, this isn't more concise, nor does it better communicate what your function is doing to the reader. So don't do this, unless you really hate conditionals.
While solving a system of linear equations with the tridiagonal matrix algorithm in Haskell, I ran into the following problem.
We have three vectors, a, b and c, and we want to build a new vector c' which is a combination of them:

c'[i] = c[i] / b[i],                      i = 0
c'[i] = c[i] / (b[i] - a[i] * c'[i-1]),   0 < i < n - 1
c'[i] = undefined,                        i = n - 1
A naive implementation of the formula above in Haskell is:

calcC' a b c = Data.Vector.generate n f
  where
    n = Data.Vector.length a
    f i | i == 0     = c!0 / b!0
        | i == n - 1 = 0
        | otherwise  = c!i / (b!i - a!i * f (i - 1))
It looks like this function calcC' has O(n²) complexity because of the recurrence. But all we actually need is to pass the inner function f one extra parameter carrying the previously generated value.
I wrote my own version of generate with O(n) complexity and a helper function mapP:
mapP f xs = mapP' xs Nothing
  where
    mapP' [] _ = []
    mapP' (x:xs) xp = xn : mapP' xs (Just xn)
      where
        xn = f x xp

generateP n f = Data.Vector.fromList $ mapP f [0 .. n-1]
As one can see, mapP acts like a standard map, but it also passes the previously generated value (or Nothing on the first call) to the mapping function.
My question: is there a standard way to do this in Haskell? Am I reinventing the wheel?
Thanks.
There are two standard functions called mapAccumL and mapAccumR that do precisely what you want.
mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
mapAccumR :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y])
Basically, they behave like a combination of fold and map.
map f     = snd . mapAccumL (\_ x -> ((), f x)) ()
foldl f b = fst . mapAccumL (\b x -> (f b x, ())) b
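Applied to the c' recurrence from the question, a sketch using mapAccumL over plain lists might look like this (calcC'acc is a made-up name; convert to and from Data.Vector as needed):

import Data.List (mapAccumL)

-- The accumulator carries the previously generated c' value. Seeding it with
-- 0 makes the first step reduce to c!0 / b!0, since a!0 * 0 vanishes; the last
-- element gets computed too, even though the algorithm never uses it.
calcC'acc :: [Double] -> [Double] -> [Double] -> [Double]
calcC'acc as bs cs = snd $ mapAccumL step 0 (zip3 as bs cs)
  where
    step prev (a, b, c) = let c'i = c / (b - a * prev) in (c'i, c'i)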
If you use Data.Array, which is lazy, you can express the recurrence directly by referring to c' while defining c'.
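For example, a sketch of that self-referential definition (my own illustration, assuming Double elements and 0-based indices):

import Data.Array (Array, listArray, bounds, (!))

-- Data.Array is non-strict in its elements, so c' may refer to its own
-- earlier entries; each entry is computed at most once, on demand.
calcC' :: Array Int Double -> Array Int Double -> Array Int Double -> Array Int Double
calcC' a b c = c'
  where
    (lo, hi) = bounds c
    c' = listArray (lo, hi)
           [ if i == lo
               then c ! i / b ! i
               else c ! i / (b ! i - a ! i * c' ! (i - 1))
           | i <- [lo .. hi] ]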
The following code seems to be the simplest implementation of the formula above in my case:

import qualified Data.Vector.Generic as V

calcC' a b c = V.postscanl' f 0.0 $ V.zip3 a b c
  where
    f c' (a, b, c) = c / (b - a * c')

Thanks to the authors of Vector who added the helpful postscanl' method.