In mathematical languages such as R, you can create a vector as follows:
x = seq(0, 2*pi, length.out = 100)
This outputs:
[1] 0.00000000 0.06346652 0.12693304 0.19039955 0.25386607 0.31733259 0.38079911
[8] 0.44426563 0.50773215 0.57119866 0.63466518 0.69813170 0.76159822 0.82506474
[15] 0.88853126 0.95199777 1.01546429 1.07893081 1.14239733 1.20586385 1.26933037
[22] 1.33279688 1.39626340 1.45972992 1.52319644 1.58666296 1.65012947 1.71359599
[29] 1.77706251 1.84052903 1.90399555 1.96746207 2.03092858 2.09439510 2.15786162
[36] 2.22132814 2.28479466 2.34826118 2.41172769 2.47519421 2.53866073 2.60212725
[43] 2.66559377 2.72906028 2.79252680 2.85599332 2.91945984 2.98292636 3.04639288
[50] 3.10985939 3.17332591 3.23679243 3.30025895 3.36372547 3.42719199 3.49065850
[57] 3.55412502 3.61759154 3.68105806 3.74452458 3.80799110 3.87145761 3.93492413
[64] 3.99839065 4.06185717 4.12532369 4.18879020 4.25225672 4.31572324 4.37918976
[71] 4.44265628 4.50612280 4.56958931 4.63305583 4.69652235 4.75998887 4.82345539
[78] 4.88692191 4.95038842 5.01385494 5.07732146 5.14078798 5.20425450 5.26772102
[85] 5.33118753 5.39465405 5.45812057 5.52158709 5.58505361 5.64852012 5.71198664
[92] 5.77545316 5.83891968 5.90238620 5.96585272 6.02931923 6.09278575 6.15625227
[99] 6.21971879 6.28318531
How can this be achieved in Haskell?
I tried creating a lambda function and using it with map, but I couldn't get the same output:
let myPi = (\x -> 2*pi)
map myPi [1..10]
Thanks
Well, you can just do
[0, 2*pi/100 .. 2*pi]
Note that this is not ideal, both performance-wise and floating-point-rounding-wise (because it translates to enumFromThenTo); Daniel Fischer's version is better (it translates to enumFromTo). Thinking it over, GHC will probably compile both to almost equally fast code, but I'm not sure. If it's really performance-critical, it's best not to use lists at all but e.g. Data.Vector.
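If you do go the Data.Vector route, here is a minimal sketch of the same grid (points is my own name; each value is computed straight from its index, so rounding doesn't accumulate):
import qualified Data.Vector.Unboxed as V

-- 100 evenly spaced points from 0 to 2*pi, inclusive at both ends
points :: V.Vector Double
points = V.generate 100 (\k -> 2 * pi * fromIntegral k / 99)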
As Jakub Hampl remarked, Haskell can deal with infinite lists. That's probably not much use to you here, but it opens interesting possibilities – for instance, you might not be sure which resolution you actually need. You can let your list begin with a very low resolution, then fold back and start again with a higher one. One simple way to achieve this:
import Data.Fixed
multiResS1 = [ log x `mod'` (2*pi) | x <- [1 ..] ]
Using this to plot the sine function looks like this:
Prelude Data.Fixed Graphics.Rendering.Chart.Simple> let domainS1 = take 200 multiResS1
Prelude Data.Fixed Graphics.Rendering.Chart.Simple> plotPNG "multiresS1.png" domainS1 sin
Easiest is a list comprehension,
[(2*pi)*k/99 | k <- [0 .. 99]]
(Multiplying by k/99 mitigates the floating-point rounding, so the last value is exactly 2*pi.)
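The same idea parameterized over the number of points, like R's length.out (gridTo2Pi is my own name for this helper):
-- n evenly spaced points from 0 to 2*pi, inclusive at both ends (assumes n >= 2)
gridTo2Pi :: Int -> [Double]
gridTo2Pi n = [(2*pi) * fromIntegral k / fromIntegral (n - 1) | k <- [0 .. n - 1]]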
I have two Repa arrays a1 and a2 and I would like to eliminate all the elements in a2 for which the corresponding index in a1 is above a certain threshold. For example:
import qualified Data.Array.Repa as R -- for Repa
import Data.Array.Repa (Z (..), (:.)(..))
a1 = R.fromFunction (Z :. 4) $ \(Z :. x) -> [8, 15, 9, 14] !! x
a2 = R.fromFunction (Z :. 4) $ \(Z :. x) -> [0, 1, 2, 3] !! x
threshold = 10
desired = R.fromFunction (Z :. 2) $ \(Z :. x) -> [0, 2] !! x
-- 15 and 14 are above the threshold, 10
One way to do this is with selectP but I would like to avoid using this, since it computes the arrays, and I would like my arrays to remain in delayed form, if possible.
Another way is with the repa-array package, but stack solver does not seem to know how to import this library with resolver nightly-2017-04-10.
One way to look at this issue is that, in order to create a Repa Array, you need to know the size (extent) of the Array upon creation (e.g. fromFunction), but, in the case of a filter operation, there is no way to know the size of the resulting Array in repa without applying the thresholding predicate, essentially computing the values of the resulting Array.
Another way to look at it is that a Delayed array is simply a function from an index to a value, which is fine for most operations. For filtering, though, once you apply a predicate, finding the value at a particular index requires knowing all the values that come before that index in the resulting array, because at any location a value may or may not have been kept.
The vector package solves this issue elegantly with stream fusion, and repa-array, the next version of Repa (still in an experimental stage), seems to be trying a similar approach, except extended to higher dimensions (I might be wrong, I haven't looked too closely).
So, short answer: there is no way to do filtering with Repa-style functional fusion. Either:
stick to selectP - faster (probably), but less memory efficient (for sure), or
piggyback onto ifilter from the vector package for sequential filtering (a rough sketch follows below).
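For concreteness, here is a rough sketch of the vector route under the question's setup (filterBelow is my own name; it uses zip/filter rather than ifilter, and it forces both arrays first, so nothing stays delayed):
import qualified Data.Array.Repa as R
import Data.Array.Repa (Z (..), (:.)(..))
import qualified Data.Vector.Unboxed as VU

-- keep the elements of vals whose counterpart in keys is not above t;
-- both arguments must already be manifest (U) arrays
filterBelow :: Int -> R.Array R.U R.DIM1 Int -> R.Array R.U R.DIM1 Int
            -> R.Array R.U R.DIM1 Int
filterBelow t keys vals =
  let kept = VU.map snd
           $ VU.filter ((<= t) . fst)
           $ VU.zip (R.toUnboxed keys) (R.toUnboxed vals)
  in  R.fromUnboxed (Z :. VU.length kept) kept

-- usage with the question's delayed arrays (assuming Int elements):
-- filterBelow threshold (R.computeS a1) (R.computeS a2)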
You can build a list of pairs with zip, then filter by a predicate function with the type (Int, Int) -> Bool, and lastly extract the first or second element of each pair (depending on which one you want) by using map fst or map snd respectively. Everything you need for this is in the Prelude.
I hope this is enough information for you to put the pieces together yourself. If in doubt, look at the type signatures of the functions I mentioned.
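Spelled out on the question's data (the names are mine), the pieces fit together roughly like this:
keys, vals :: [Int]
keys = [8, 15, 9, 14]
vals = [0, 1, 2, 3]

-- pair each value with its key, drop the pairs whose key is above 10,
-- then keep just the values
kept :: [Int]
kept = map snd (filter (\(k, _) -> k <= 10) (zip keys vals))
-- kept == [0, 2]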
This is an example from Learn You a Haskell:
ghci> [ x*y | x <- [2,5,10], y <- [8,10,11], x*y > 50]
[55,80,100,110]
So, what's going on here, will x*y be calculated twice or once?
It would be calculated twice unless common subexpression elimination occurs.
Depending on inlining and your optimization level, GHC may do quite aggressive things with the list comprehension.
In general, you should explicitly share common expressions to guarantee sharing.
To be sure of the compiler's behaviour, prefer:
[ product | x <- [2, 5, 10]
, y <- [8, 10, 11]
, let product = x * y
, product > 50]
Looking at the Core generated with the -O2 option, it contains the following lines (relevant parts only, simplified):
case (y_aAD * sc_s1Rq) > 50 of
False -> go_XB2 sc1_s1Rr;
True -> (y_aAD * sc_s1Rq):(go_XB2 sc1_s1Rr)
This clearly shows that the multiplication is calculated twice, so it is better to bind the common expression with let to prevent recomputation.
Concatenative languages have some very intriguing characteristics, such as being able to compose functions of different arity and being able to factor out any section of a function. However, many people dismiss them because of their use of postfix notation and how it's tough to read. Plus the Polish probably don't appreciate people using their carefully crafted notation backwards.
So, is it possible to have prefix notation? If it is, what would the tradeoffs be?
I have an idea of how it could work, but I'm not experienced with concatenative languages so I'm probably missing something. Basically, a function would be evaluated in reverse order and values would be pulled from the stack in reverse order. To demonstrate this, I'll compare postfix to what prefix would look like. Here are some concatenative expressions with the traditional postfix notation.
5 dup * ! Multiply 5 by itself
3 2 - ! Subtract 2 from 3
(1, 2, 3, 4, 5) [2 >] filter length ! Get the number of integers from 1 to 5
! that are greater than 2
The expressions are evaluated from left to right: in the first example, 5 is pushed on the stack, then dup duplicates the top value on the stack, then * multiplies the top two values on the stack. Functions pull their last argument first from the stack: in the second example, when - is called, 2 is at the top of the stack, but it is the last argument.
Here is what I think prefix notation would look like:
* dup 5
- 3 2
length filter (1, 2, 3, 4, 5) [< 2]
The expressions are evaluated from right to left, and functions pull their first argument first from the stack. Note how the prefix filter example reads much more closely to its description and looks similar to the applicative style. One issue I noticed is factoring things out might not be as useful. For example, in postfix notation you can factor out 2 - from 3 2 - to create a subtractTwo function. In prefix notation you can factor out - 3 from - 3 2 to create a subtractFromThree function, which doesn't seem as useful.
Barring any glaring issues, perhaps a concatenative language that uses prefix notation could win over the people who dislike postfix notation. Any insight is appreciated.
Well certainly, if your words are still fixed-arity then it's just a matter of executing tokens right to left.
It's only because of n-arity functions that prefix notation implies parentheses, and it's only because of wanting human "reading order" to match execution order that being a stack language implies postfix.
I'm writing such a language right now as it happens, and so far I like some of the side-effects of using prefix notation. The semantics are based on Joy:
Files are parsed from left to right, but executed from right to left.
By extension, definitions must come after the point at which they are used.
As a nice side-effect, comments are simply lists which are dropped.
Here's the factorial function, for instance:
def 'fact [cond [* fact - 1 dup] [1 drop] dup]
I also find it easier to reason about the code as I'm writing it, but I don't have a strong background in concatenative languages. Here's my (probably-naive) derivation of the map function over lists. The 'nb' function drops something and is used for comments. 'stash [f]' pops into a temp, runs 'f' on the rest of the stack, then pushes the temp back on.
def 'map [q [cons map stash [head swap i] dup stash [tail dup]] [nb] is_cons nip]
nb [map [f] (cons x y) -> cons map [f] x f y
stash [tail dup] [f] (cons x y) = [f] y (cons x y)
dup [f] y (cons x y) = [f] [f] y (cons x y)
stash [head swap i] [f] [f] y (cons x y) = [f] x (f y)
cons map [f] x (f y) = cons map [f] x f y
map [f] [] -> []]
I just came from reading about the Om Language
Seems just what you are talking about. From its description (emphasis mine):
The Om language is:
a novel, maximally-simple concatenative, homoiconic programming and algorithm notation language with:
minimal syntax, comprised of only three elements.
prefix notation, in which functions manipulate the remainder of the program itself. [...]
It also states that it's not finished, and will experience much change yet.
Still, it seems to be working, and it is really interesting as a proof of concept.
I imagine a concatenative prefix language without a stack. It could call functions, which would then themselves interpret code until they got all the operands they need. The interpreter would then call the next function. It would only need one memory construct - the result. Everything else could be read from the source code at the time of execution. As you might have noticed, I am talking about an interpreted language, not a compiled one.
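As a toy illustration of that idea (entirely my own sketch in Haskell, not an existing language): each operator recursively interprets the tokens that follow it until it has its operands, so no explicit stack is needed, only the result being built up.
data Tok = Num Double | Op Char

-- an operator interprets the code after it until it has both operands
eval :: [Tok] -> (Double, [Tok])
eval (Num n : rest) = (n, rest)
eval (Op c : rest)  =
  let (a, rest')  = eval rest   -- first operand
      (b, rest'') = eval rest'  -- second operand
  in  (apply c a b, rest'')
eval [] = error "unexpected end of program"

apply :: Char -> Double -> Double -> Double
apply '+' = (+)
apply '-' = (-)
apply '*' = (*)
apply c   = error ("unknown operator: " ++ [c])

-- eval [Op '-', Num 3, Num 2]  ==>  (1.0, [])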
I've just started playing with GHCi. I see that list comprehensions can basically solve an equation within a given set:
Prelude> [x | x <- [1..20], x^2 == 4]
[2]
(finds only one root, as expected)
Now, why can't I solve equations with results in ℝ, given that the solution is included in the specified range?
[x | x <- [0.1,0.2..2.0], x*4 == 2]
How can I solve such equations within real numbers set?
Edit: Sorry, I meant 0.1, of course.
List comprehension doesn't solve equations, it just generates a list of items that belong to certain sets. If your set is defined as any x in [1..20] such that x^2 == 4, that's what you get.
You cannot do that with a complete list of every real number from 0.01 to 2.0, because such a list cannot be represented in Haskell (or rather: it cannot be represented on any computer), since it would contain infinitely many numbers with infinite precision.
[0.01,0.2..2.0] is a list made of the following numbers:
Prelude> [0.01,0.2..2.0]
[1.0e-2,0.2,0.39,0.5800000000000001,0.7700000000000001,0.9600000000000002,1.1500000000000004,1.3400000000000005,1.5300000000000007,1.7200000000000009,1.910000000000001]
And none of these numbers satisfies your condition.
Note that you probably meant [0.1,0.2..2.0] instead of [0.01,0.2..2.0]. Still:
Prelude> [0.1,0.2..2.0]
[0.1,0.2,0.30000000000000004,0.4000000000000001,0.5000000000000001,0.6000000000000001,0.7000000000000001,0.8,0.9,1.0,1.1,1.2000000000000002,1.3000000000000003,1.4000000000000004,1.5000000000000004,1.6000000000000005,1.7000000000000006,1.8000000000000007,1.9000000000000008,2.000000000000001]
As others have mentioned, this is not an efficient way to solve equations, but it can be done with ratios.
Prelude> :m +Data.Ratio
Prelude Data.Ratio> [x|x<-[1%10, 2%10..2], x*4 == 2]
[1 % 2]
Read x % y as x divided by y.
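Since Rational arithmetic is exact, every element of the range is exact as well; note that ratios print in reduced form:
Prelude Data.Ratio> take 5 [1%10, 2%10 ..]
[1 % 10,1 % 5,3 % 10,2 % 5,1 % 2]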
The floating point issue can be solved in this way:
Prelude> [x | x <- [0.1, 0.2 .. 2.0], abs(2 - x*4) < 1e-9]
[0.5000000000000001]
For a reference on why floating-point numbers can cause problems, see this: Comparing floating point numbers
First of all [0.01,0.2..2.0] wouldn't include 0.5 even if floating point arithmetic were accurate. I assume you meant the first element to be 0.1.
The list [0.1,0.2..2.0] does not contain 0.5 because floating point arithmetic is imprecise and the 5th element of [0.1,0.2..2.0] is 0.5000000000000001, not 0.5.
The default
[1..5]
gives this
[1,2,3,4,5]
and can also be done with the range function. Is it possible to change the step size between the points, so that I could get something like the following instead?
[1,1.5,2,2.5,3,3.5,4,4.5,5]
[1,1.5..5]
You have to be careful with floating point arithmetic. It can't represent 0.1 precisely, so if you try
Prelude> [0,0.1 .. 1]
[0.0,0.1,0.2,0.30000000000000004,0.4,0.5,0.6,0.7,0.7999999999999999,0.8999999999999999,0.9999999999999999]
Best way is more like:
Prelude> map (/10) [0..10]
[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]
Actually, [1..5] is syntactic sugar for
enumFromTo 1 5
and [1,1.5..5] for
enumFromThenTo 1 1.5 5
For more information, see http://en.wikibooks.org/wiki/Haskell/Syntactic_sugar
I just want to elaborate on some of the answers above. As @mattiast correctly mentioned,
[start, next .. end] is really just syntactic sugar for:
enumFromThenTo start next end
However, notice that the middle value (in your case, the 1.5) is not the step size, but the actual value the second element should have; the step size is next - start.
So if you want to decrement in steps of 0.2, you would write [2,1.8..1], since 2 - 1.8 == 0.2.
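If you want a descending range without the floating-point drift, the map trick from the earlier answer carries over (descending is my own name):
-- each element comes from a single division of exact integers,
-- so it is rounded only once rather than accumulating error
descending :: [Double]
descending = map ((/ 10) . fromIntegral) [20, 18 .. 10 :: Int]
-- descending == [2.0, 1.8, 1.6, 1.4, 1.2, 1.0]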