I'm trying to use Yampa for some basic system simulation, like I'd do in Simulink. In this case I want to simulate a spring and damper system, introduced by this Simulink tutorial. I've written the following signal functions to represent the system:
system = time >>> force >>> displacement
force = constant (m * g)
displacement = feedback (-) (velocity >>> integral) (gain $ k / m) 0
velocity = feedback (-) integral (gain $ c / m) 0
Where the feedback function creates a basic feedback loop and is implemented like this:
feedback op a b b0 = loopPre b0 inner
  where inner = arr (uncurry op) >>> a >>> (identity &&& b)
Oh, and:
gain x = arr (*x)
With sensible positive constants, I get a wildly unstable system:
Is there something obviously wrong in the way I'm constructing feedback loops or applying the integration?
Change integral to imIntegral 0
displacement = feedback (-) (velocity >>> imIntegral 0) (gain $ k / m) 0
velocity = feedback (-) (imIntegral 0) (gain $ c / m) 0
From spring.hs:
Using Simulink:
Something funny is happening in the integral function: switching to imIntegral 0 gives the same curve as in MATLAB.
My guess is that integral is delayed by one sample, since it doesn't have a starting value, which changes the behaviour of the loop.
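If you want to look at the numbers rather than a plot, here is a minimal sketch using Yampa's embed and deltaEncode; the step size and duration are arbitrary choices of mine, and it assumes your definitions (with imIntegral 0) are in scope:
import FRP.Yampa

-- Displacement sampled every 10 ms for 10 s; system is used at type SF () Double here.
samples :: [Double]
samples = embed system (deltaEncode 0.01 (replicate 1000 ()))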
The task is to find all two-digit numbers representable as the sum of the square roots of two natural numbers.
I try this:
func = [sqrt (x) + sqrt (y) | x <- [10..99], y <- [10..99], sqrt (x) `mod` 1 == 0, sqrt (y) `mod` 1 == 0]
Result:
Unresolved top-level overloading Binding : func
Outstanding context : (Integral b, Floating b)
How can I fix this?
This happens because of a conflict between these two types:
sqrt :: Floating a => a -> a
mod :: Integral a => a -> a -> a
Because you write mod (sqrt x) 1, and sqrt is constrained to return the same type as it takes, the compiler is left trying to find a type for x that simultaneously satisfies the Floating constraint of sqrt and the Integral constraint of mod. There are no types in the base library that satisfy both constraints.
A quick fix is to use mod' :: Real a => a -> a -> a:
import Data.Fixed
func = [sqrt (x) + sqrt (y) | x <- [10..99], y <- [10..99], sqrt (x) `mod'` 1 == 0, sqrt (y) `mod'` 1 == 0]
However, from the error you posted, it looks like you may not be using GHC, and mod' is probably a GHC-ism. In that case you could copy the definition (and the definition of the helper function div') from here.
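For reference, the definitions in Data.Fixed are short; something close to this (copied from memory, so double-check against the source) should work in Hugs as well:
div' :: (Real a, Integral b) => a -> a -> b
div' n d = floor (toRational n / toRational d)

mod' :: Real a => a -> a -> a
mod' n d = n - fromInteger (div' n d) * d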
But I recommend a more involved fix. The key observation is that if x = sqrt y, then x*x = y, so we can avoid calling sqrt at all. Instead of iterating over numbers and checking if they have a clean sqrt, we can iterate over square roots; their squares will definitely have clean square roots. A straightforward application of this refactoring might look like this:
sqrts = takeWhile (\n -> n*n <= 99)
      . dropWhile (\n -> n*n < 10)
      $ [0..]
func = [x + y | x <- sqrts, y <- sqrts]
Of course, func is a terrible name (it's not even a function!), and sqrts is a constant we could compute ourselves, and is so short we should probably just inline it. So we might then simplify to:
numberSums = [x + y | x <- [4..9], y <- [4..9]]
At this point, I would be wondering whether I really wanted to write this at all, preferring just
numberSums = [8..18]
which, unlike the previous iteration, doesn't have any duplicates. It has lost all of the explanatory power of why this is an interesting constant, though, so you would definitely want a comment.
-- sums of pairs of numbers, each of whose squares lies in the range [10..99]
numberSums = [8..18]
This would be my final version.
Also, although the above definitions were not parameterized by the range in which to search for perfect squares, all the proposed refactorings can still be applied when that range is a parameter; checking this is a good exercise for the reader to confirm they have understood each change.
Easier to show than to explain. I have this tiny function to do base conversion from base 10:
demode 0 _ = []
demode n b = m:(demode d b)
  where (d, m) = divMod n b
So, if we want to see how we would write 28 in base 9, demode 28 9 = [1,3].
But, of course, we then have to reverse the list so it reads as 31.
This could easily be done with a function that calls demode and then reverses its result, but with Haskell being so cool and all, there's probably a more elegant way of saying "in the end case (demode 0 _), append everything to a list and then reverse the list".
Note that base conversion is just an example I'm using to illustrate the question, the real question is how to apply a final transformation to the last result of a recursive function.
Nope. Your only hope is to use a helper function. Note that Haskell does allow you to define functions in where clauses (at least for now), so that doesn't have to be a 'separate function' in the sense of a separate top-level definition. You have basically two choices:
Add an accumulator and do whatever work you want to do in the end:
demode n b = w n [] where
  w 0 xn = reverse xn
  w n xn = w d (xn ++ [m]) where
    (d, m) = divMod n b
Hopefully you can follow how that would work, but note that, in this case, you are far better off saying
demode n b = w n [] where
  w 0 xn = xn
  w n xn = w d (m : xn) where
    (d, m) = divMod n b
which builds the list in reversed order and returns that.
Push the regular definition down to a helper function, and wrap that function in whatever work you want:
demode n b = reverse (w n) where
  w 0 = []
  w n = m : w d where
    (d, m) = divMod n b
(I've used the term w as a short-hand for 'worker' in all three examples).
In either case, you can generally benefit from learning to write your recursions using higher-order functions instead.
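For instance, here is a hedged sketch of demode written with unfoldr from Data.List; the reverse ends up in the wrapper, just like in the second option above:
import Data.List (unfoldr)

demode :: Integral a => a -> a -> [a]
demode n b = reverse (unfoldr step n)
  where step 0 = Nothing
        step k = let (d, m) = divMod k b in Just (m, d)

-- demode 28 9 == [3,1]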
In general, it's somewhat bad style in Haskell to try to do 'everything in one function'; Haskell style is built around dividing a problem into multiple parts, solving those with separate functions, and composing the resulting functions together; especially if those functions will be useful elsewhere as well (which happens more often than you might naively expect).
I'm a newbie and have written this code in WinHugs Haskell to find the maximum exponent k such that 2^k is a divisor of n:
maxexp2 :: Int -> Int
maxexp2 n
  | n == 0 || 2^k `mod` n /= 0 = 0
  | otherwise                  = k
  where k = e `div` f
        e = round (fromIntegral (log n))
        f = round (fromIntegral (log 2))
Somehow I know there is something fishy with the e and f part... the error says: an instance of Fractional Int is needed for the execution of this function.
I don't really understand that...can someone please explain it to me ?
Obviously, the logarithm of almost any number is irrational, so using fromIntegral on it should indeed seem fishy to you. Why do you think you need it at that point? Where you do need it is before the logarithm, because log also accepts only values of the Floating class, as we can easily find out¹:
Prelude> :t log
log :: Floating a => a -> a
So it has to be e = round (log $ fromIntegral n). You don't need it for f at all, because 2 is just a generic Num literal, not an Int. Obviously though, it can't be right to simply round log 2, because that is just 1. What you probably want is round $ e / f with floating-point e and f.
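Putting that together, keeping your overall guard structure and only fixing the conversions, a sketch could look like this (whether the algorithm itself does what you ultimately want is a separate question):
maxexp2 :: Int -> Int
maxexp2 n
  | n == 0 || 2^k `mod` n /= 0 = 0
  | otherwise                  = k
  where k = round (e / f)
        e = log (fromIntegral n) :: Double
        f = log 2               :: Double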
¹ I don't know if this works in Hugs; probably. If not, use GHCi (why does everybody new seem to use Hugs at the moment anyway? I don't know why you would use anything but GHC).
I'm trying to generate random masses for hypothetical planets in Haskell. I want to produce these masses by sampling a bi-modal distribution (ideally the superposition of two normal distributions: one corresponding to small planets and one corresponding to gas giants). I've looked at the statistics package, which provides the quantile function, which can turn a uniformly distributed Double into a Double on a number of distributions. But there doesn't seem to be any support for composing distributions.
This particular case could be hacked around by picking one distribution or the other to sample before-hand, but I'd like to do it with a single distribution, especially since I might need to tweak the overall distribution later. Eventually I might replace the normal distribution with real data from sky surveys.
I'm considering implementing rejection sampling myself, which can handle arbitrary distributions fairly simply, but it seems rather inefficient, and it certainly wouldn't be a good idea to implement it if a solution exists already as a library.
Is there a Haskell library that supports sampling from composed or explicitly specified distributions? Or an existing Haskell implementation of rejection sampling? Alternatively, is there an explicit formula for the inverse of the CDF of the sum of two normal distributions?
In the case of a simple mixture of distributions, you can get an efficient sampler via the 'hack' you first mentioned:
This particular case could be hacked around by picking one distribution or the other to sample before-hand, but I'd like to do it with a single distribution, especially since I might need to tweak the overall distribution later.
This is actually a case of Gibbs sampling, which is very prevalent in statistics. It's very flexible, and if you know the number of mixtures you're using, it will probably be hard to beat. Choose one individual distribution from the entire ensemble to sample from, and then sample from that conditional distribution. Rinse and repeat.
Here's a simple, unoptimized Haskell implementation for a mixture-of-Gaussians Gibbs sampler. It's pretty basic, but you get the idea:
import System.Random
import Control.Monad.State
type ModeList = [(Double, Double)] -- A list of mean/stdev pairs, for each mode.
-- Generate a Gaussian (0, 1) variate.
boxMuller :: StdGen -> (Double, StdGen)
boxMuller gen = (sqrt (-2 * log u1) * cos (2 * pi * u2), gen'')
  where (u1, gen')  = randomR (0, 1) gen
        (u2, gen'') = randomR (0, 1) gen'

sampler :: ModeList -> State StdGen Double
sampler modeInfo = do
  gen <- get
  let n = length modeInfo
      (z0, g0) = boxMuller gen
      (c, g1)  = randomR (0, n - 1) g0  -- Sample from the components.
      (cmu, csig) = modeInfo !! c
  put g1
  return $ cmu + csig * z0              -- Sample from the conditional distribution.
Here's an example run: sampling 100 times from a one-dimensional mixture of two Gaussians. The modes are at x = -3 and x = 2.5, and each mixture component has its own variance. You could add as many modes as you want here.
main = do
  let gen = mkStdGen 42
      modeInfo = [(2.5, 1.0), (-3, 1.5)]
      samples  = (`evalState` gen) . replicateM 100 $ sampler modeInfo
  print samples
Here's a smoothed density plot of those 100 samples (using R and ggplot2):
A more general-purpose algorithm would be a rejection or importance sampler, and in the case of more complicated distributions you're probably going to want to hand-roll an appropriate MCMC routine. Here is a good introduction to Monte Carlo and MCMC.
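If you do end up wanting rejection sampling for this particular mixture, here is a hedged, unoptimized sketch; the uniform proposal range [-10, 10] and the bound 4 are ad hoc choices of mine, and the tiny bit of mass outside the proposal range is ignored:
import System.Random (randomRIO)

-- Density of the same two-Gaussian mixture as above (equal weights).
mixturePdf :: Double -> Double
mixturePdf x = 0.5 * gauss 2.5 1.0 x + 0.5 * gauss (-3) 1.5 x
  where gauss mu sig t = exp (-(t - mu)^2 / (2 * sig^2)) / (sig * sqrt (2 * pi))

-- Propose uniformly on [-10, 10] (density 1/20) and accept with
-- probability mixturePdf x / (4 * (1/20)); 4 bounds the density ratio.
rejectionSample :: IO Double
rejectionSample = do
  x <- randomRIO (-10, 10)
  u <- randomRIO (0, 1)
  if u <= mixturePdf x / (4 * (1 / 20))
    then return x
    else rejectionSample
On average you pay about 4 proposals per accepted sample here, which is one reason sampling the mixture component first, as above, is hard to beat.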
Hmmmm. The best way I'm familiar with is to adapt the MonadRandom package to get a "probability monad", borrowing some tools from http://en.wikipedia.org/wiki/Normal_distribution#Generating_values_from_normal_distribution :
import Control.Monad.Random  -- from the MonadRandom package

getRandomStrictlyBetween :: (Ord a, Random a, MonadRandom m) =>
                            (a, a) -> m a
getRandomStrictlyBetween (lo, hi) = do
  x <- getRandomR (lo, hi)
  -- x is uniformly randomly chosen from the *closed* interval
  if lo < x && x < hi then return x else getRandomStrictlyBetween (lo, hi)

normalValue :: MonadRandom m => m Double
normalValue = do
  u <- getRandomStrictlyBetween (0, 1)
  v <- getRandomStrictlyBetween (0, 2 * pi)
  return (sqrt (-2 * log u) * cos v)  -- according to Wikipedia
and then you can derive more or less arbitrary distributions; for example, to get the distribution of a random variable that is y with probability p and z with probability (1 - p), you just write
do alpha <- getRandom -- double chosen from [0, 1)
   if alpha < p then y else z
of which bimodal distributions appear to be a special case. To sample from these distributions, just do evalRandIO distribution to sample in the IO monad.
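As a hedged sketch, the bimodal planet-mass case could then look like this (the weight, means and standard deviations are made-up numbers, and normalValue is the function defined above):
bimodalMass :: MonadRandom m => m Double
bimodalMass = do
  alpha <- getRandom            -- Double chosen from [0, 1)
  z     <- normalValue
  return $ if alpha < (0.7 :: Double)
             then 1.0 + 0.3 * z   -- "small planets": N(1.0, 0.3)
             else 100 + 20 * z    -- "gas giants":    N(100, 20)
Sampling is then just evalRandIO bimodalMass.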
I was a bit confused by the documentation for fix (although I think I understand what it's supposed to do now), so I looked at the source code. That left me more confused:
fix :: (a -> a) -> a
fix f = let x = f x in x
How exactly does this return a fixed point?
I decided to try it out at the command line:
Prelude Data.Function> fix id
...
And it hangs there. Now to be fair, this is on my old macbook which is kind of slow. However, this function can't be too computationally expensive since anything passed in to id gives that same thing back (not to mention that it's eating up no CPU time). What am I doing wrong?
You are doing nothing wrong. fix id is an infinite loop.
When we say that fix returns the least fixed point of a function, we mean that in the domain theory sense. So fix (\x -> 2*x-1) is not going to return 1, because although 1 is a fixed point of that function, it is not the least one in the domain ordering.
I can't describe the domain ordering in a mere paragraph or two, so I will refer you to the domain theory link above. It is an excellent tutorial, easy to read, and quite enlightening. I highly recommend it.
For the view from 10,000 feet, fix is a higher-order function which encodes the idea of recursion. If you have the expression:
let x = 1:x in x
Which results in the infinite list [1,1..], you could say the same thing using fix:
fix (\x -> 1:x)
(Or simply fix (1:)), which says find me a fixed point of the (1:) function, IOW a value x such that x = 1:x... just like we defined above. As you can see from the definition, fix is nothing more than this idea -- recursion encapsulated into a function.
It is a truly general concept of recursion as well -- you can write any recursive function this way, including functions that use polymorphic recursion. So for example the typical fibonacci function:
fib n = if n < 2 then n else fib (n-1) + fib (n-2)
Can be written using fix this way:
fib = fix (\f -> \n -> if n < 2 then n else f (n-1) + f (n-2))
Exercise: expand the definition of fix to show that these two definitions of fib are equivalent.
But for a full understanding, read about domain theory. It's really cool stuff.
I don't claim to understand this at all, but if this helps anyone...then yippee.
Consider the definition of fix. fix f = let x = f x in x. The mind-boggling part is that x is defined as f x. But think about it for a minute.
x = f x
Since x = f x, then we can substitute the value of x on the right hand side of that, right? So therefore...
x = f . f $ x -- or x = f (f x)
x = f . f . f $ x -- or x = f (f (f x))
x = f . f . f . f . f . f . f . f . f . f . f $ x -- etc.
So the trick is, in order to terminate, f has to generate some sort of structure, so that a later f can pattern match that structure and terminate the recursion, without actually caring about the full "value" of its parameter (?)
Unless, of course, you want to do something like create an infinite list, as luqui illustrated.
TomMD's factorial explanation is good. Fix's type signature is (a -> a) -> a. The type signature for (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) is (b -> b) -> b -> b, in other words, (b -> b) -> (b -> b). So we can say that a = (b -> b). That way, fix takes our function, which is a -> a, or really, (b -> b) -> (b -> b), and will return a result of type a, in other words, b -> b, in other words, another function!
Wait, I thought it was supposed to return a fixed point...not a function. Well it does, sort of (since functions are data). You can imagine that it gave us the definitive function for finding a factorial. We gave it a function that didn't know how to recurse (hence one of its parameters is a function used to recurse), and fix taught it how to recurse.
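Concretely, that "definitive function" is just this (a small sketch using TomMD's lambda; fix comes from Data.Function):
import Data.Function (fix)

factorial :: Integer -> Integer
factorial = fix (\recurse d -> if d > 0 then d * recurse (d - 1) else 1)

-- factorial 5 == 120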
Remember how I said that f has to generate some sort of structure so that a later f can pattern match and terminate? Well that's not exactly right, I guess. TomMD illustrated how we can expand x to apply the function and step towards the base case. For his function, he used an if/then, and that is what causes termination. After repeated replacements, the in part of the whole definition of fix eventually stops being defined in terms of x and that is when it is computable and complete.
You need a way for the fixpoint to terminate. Expanding your example it's obvious it won't finish:
fix id
--> let x = id x in x
--> id x
--> id (id x)
--> id (id (id x))
--> ...
Here is a real example of me using fix (note I don't use fix often and was probably tired / not worrying about readable code when I wrote this):
(fix (\f h -> if (pred h) then f (mutate h) else h)) q
WTF, you say! Well, yes, but there are a few really useful points here. First of all, your first fix argument should usually be a function which is the 'recurse' case and the second argument is the data on which to act. Here is the same code as a named function:
getQ h
  | pred h    = getQ (mutate h)
  | otherwise = h
If you're still confused then perhaps factorial will be an easier example:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 5 -->* 120
Notice the evaluation:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 3 -->
let x = (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x in x 3 -->
let x = ... in (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 3 -->
let x = ... in (\d -> if d > 0 then d * (x (d-1)) else 1) 3
Oh, did you just see that? That x became a function inside our then branch.
let x = ... in if 3 > 0 then 3 * (x (3 - 1)) else 1 -->
let x = ... in 3 * x 2 -->
let x = ... in 3 * (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 2 -->
In the step above, remember that x = f x, which is why the lambda at the end is applied to two arguments (x and 2) instead of just 2.
let x = ... in 3 * (\d -> if d > 0 then d * (x (d-1)) else 1) 2 -->
And I'll stop here!
How I understand it is, it finds a value for the function such that it outputs the same thing you give it. The catch is, it will always choose undefined (or an infinite loop; in Haskell, undefined and infinite loops are the same) or whatever has the most undefineds in it. For example, with id:
λ <*Main Data.Function>: id undefined
*** Exception: Prelude.undefined
As you can see, undefined is a fixed point, so fix will pick that. If you instead try (\x->1:x):
λ <*Main Data.Function>: undefined
*** Exception: Prelude.undefined
λ <*Main Data.Function>: (\x->1:x) undefined
[1*** Exception: Prelude.undefined
So fix can't pick undefined. To make the connection to infinite loops a bit clearer:
λ <*Main Data.Function>: let y=y in y
^CInterrupted.
λ <*Main Data.Function>: (\x->1:x) (let y=y in y)
[1^CInterrupted.
Again, a slight difference. So what is the fixed point? Let us try repeat 1.
λ <*Main Data.Function>: repeat 1
[1,1,1,1,1,1, and so on
λ <*Main Data.Function>: (\x->1:x) $ repeat 1
[1,1,1,1,1,1, and so on
It is the same! Since this is the only fixed point, fix must settle on it. Sorry fix, no infinite loops or undefined for you.
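You can confirm that directly with fix (same kind of session as above; the take 6 just keeps the output finite):
λ <*Main Data.Function>: take 6 (fix (\x->1:x))
[1,1,1,1,1,1]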
As others pointed out, fix somehow captures the essence of recursion. Other answers already do a great job at explaining how fix works. So let's take a look from another angle and see how fix can be derived by generalising, starting from a specific problem: we want to implement the factorial function.
Let's first define a non-recursive factorial function. We will use it later to "bootstrap" our recursive implementation.
factorial n = product [1..n]
We approximate the factorial function by a sequence of functions: for each natural number n, factorial_n coincides with factorial up to and including n. Clearly factorial_n converges towards factorial for n going towards infinity.
factorial_0 n = if n == 0 then 1 else undefined
factorial_1 n = n * factorial_0(n - 1)
factorial_2 n = n * factorial_1(n - 1)
factorial_3 n = n * factorial_2(n - 1)
...
Instead of writing factorial_n out by hand for every n, we implement a factory function that creates these for us. We do this by factoring the commonalities out and abstracting over the calls to factorial_[n - 1] by making them a parameter to the factory function.
factorialMaker f n = if n == 0 then 1 else n * f(n - 1)
Using this factory, we can create the same converging sequence of functions as above. For each factorial_n we need to pass a function that calculates the factorials up to n - 1. We simply use the factorial_[n - 1] from the previous step.
factorial_0 = factorialMaker undefined
factorial_1 = factorialMaker factorial_0
factorial_2 = factorialMaker factorial_1
factorial_3 = factorialMaker factorial_2
...
If we pass our real factorial function instead, we materialize the limit of the series.
factorial_inf = factorialMaker factorial
But since that limit is the real factorial function we have factorial = factorial_inf and thus
factorial = factorialMaker factorial
Which means that factorial is a fixed-point of factorialMaker.
Finally we derive the function fix, which returns factorial given factorialMaker. We do this by abstracting over factorialMaker and making it an argument to fix (i.e. f corresponds to factorialMaker and fix f corresponds to factorial):
fix f = f (fix f)
Now we find factorial by calculating the fixed-point of factorialMaker.
factorial = fix factorialMaker
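As a quick sanity check of the derivation, here is a self-contained sketch restating the definitions from above; both lists print as [1,1,2,6,24,120]:
factorial :: Integer -> Integer
factorial n = product [1..n]

factorialMaker :: (Integer -> Integer) -> Integer -> Integer
factorialMaker f n = if n == 0 then 1 else n * f (n - 1)

fix :: (a -> a) -> a
fix f = f (fix f)

main :: IO ()
main = print (map factorial [0..5], map (fix factorialMaker) [0..5])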