Guarantee of sameness of output after switching order in functional programming - Haskell

I started reading some of Haskell's documentation, and there's a fundamental concept I just don't understand. I read about it in other places as well, but I want to understand it once and for all.
In many places discussing functional programming, I keep reading that if the functions you're using are pure (they have no side effects and give the same result for the same input on every call), then you can switch the order in which they are called when composing them, and the output of the composed call is guaranteed to remain the same regardless of the order.
For example, here is an entry from the Haskell Wiki:
Haskell is a pure language, which means that the result of any function call is fully determined by its arguments. Pseudo-functions like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell. Moreover, Haskell functions can't have side effects, which means that they can't effect any changes to the "real world", like changing files, writing to the screen, printing, sending data over the network, and so on. These two restrictions together mean that any function call can be replaced by the result of a previous call with the same parameters, and the language guarantees that all these rearrangements will not change the program result!
But when I fiddle with this idea I can quickly think of examples that contradict the statement above. For instance, let's say I have two functions x and y, plus two compositions of them (I will use pseudocode rather than Haskell):
x(a)->a+3
y(a)->a*3
z(a)->x(y(a))
w(a)->y(x(a))
Now, if we execute z and w, we get:
z(5) //gives 3*5+3=18
w(5) //gives (5+3)*3=24
That being so, I think I misunderstood the promised guarantee they speak about. Can anybody explain it to me?

When you compare x(y(a)) to y(x(a)), those two expressions are not equivalent, because x and y aren't called with the same arguments in each. In the first expression, x is called with the argument y(a) and y is called with the argument a; whereas in the second, y is called with x(a), not a, as its argument, and x is called with a, not y(a). So: different arguments, (possibly) different results.
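To make this concrete, here is a direct Haskell rendering of the pseudocode (just a sketch, keeping the one-letter names from the question):

x, y, z, w :: Int -> Int
x a = a + 3
y a = a * 3
z = x . y      -- z 5 = x (y 5) = (5 * 3) + 3 = 18
w = y . x      -- w 5 = y (x 5) = (5 + 3) * 3 = 24

main :: IO ()
main = print (z 5, w 5)   -- (18,24)

Swapping x and y changes which argument each function receives, so the differing results say nothing about purity or evaluation order.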
When people say that the order does not matter, they mean that in the following code:
a = f(x)
b = g(y)
you can switch the definitions of a and b without affecting their values. That is, it makes no difference whether f is called before g or vice versa. This is clearly not true for the following code:
a = getchar()
b = getchar()
If you switch a and b here, their values are switched as well, because getchar returns a (possibly) different character each time that it's called. So a purely functional language can't have a function exactly like getchar.
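Here is a small sketch of the contrast, using only the standard Prelude (the names pureExample and ioExample are made up for illustration):

-- Swapping the two pure bindings cannot change the result:
pureExample :: (Int, Int)
pureExample =
    let a = 2 + 3
        b = 4 * 5
    in  (a, b)          -- always (5,20), whichever binding is written first

-- Swapping the two getChar calls swaps which character lands in a and in b,
-- which is exactly why getChar is an IO action rather than a pure function:
ioExample :: IO (Char, Char)
ioExample = do
    a <- getChar
    b <- getChar
    return (a, b)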

Related

Is there a fast way of going from a symbol to a function call in Julia? [duplicate]

This question already has an answer here: Julia: invoke a function by a given string
I know that you can call functions using their name as follows
f = x -> println(x)
y = :f
eval(:($y("hi")))
but this is slow since it uses eval. Is it possible to do this in a different way? I know it's easy to go the other direction by just doing symbol(f).
What are you trying to accomplish? Needing to eval a symbol sounds like a solution in search of a problem. In particular, you can just pass around the original function, thereby avoiding issues with needing to track the scope of f (or, since f is just an ordinary variable in your example, the possibility that it would get reassigned), and with fewer characters to type:
f = x -> println(x)
g = f
g("hi")
I know it's easy to go the other direction by just doing symbol(f).
This is misleading, since it's not actually going to give you back f (that transformation would not be unique). Instead, it gives you a symbol spelled like the function's printed representation (which might happen to be f, sometimes). It is simply equivalent to calling Symbol(string(f)), a combination that is common enough to be useful for other purposes.
Actually I have found use for the above scenario. I am working on a simple form compiler allowing for the convenient definition of variational problems as encountered in e.g. finite element analysis.
I am relying on the Julia parser to do an initial analysis of the syntax. The equations entered are valid Julia syntax, but will trigger errors on execution because some of the symbols or methods are not available at the point of the problem definition.
So what I do is roughly this:
I have a type that can hold my problem description:
type Cmd f; a; b; end
I have defined a macro so that I have access to the problem description AST. I traverse this expression and create a Cmd object from its elements (this is not completely unlike the strategy behind the @mat macro in MATLAB.jl):
macro m(xp)
    c = Cmd(xp.args[1], xp.args[3], xp.args[2])
    :($c)
end
At a later step, I run the Cmd. Evaluation of the symbols happens only at this stage (yes, I need to be careful of the evaluation context):
function run(c::Cmd)
    xp = Expr(:call, c.f, c.a, c.b)
    eval(xp)
end
Usage example:
c = @m a^b
...
a, b = 2, 3
run(c)
which returns 9 (note that the macro stores the operands swapped, so running c evaluates b^a = 3^2 = 9 rather than a^b = 8). So, in short, the question is relevant in at least some metaprogramming scenarios. In my case I have to admit I couldn't care less about performance, as all of this is mere preprocessing and syntactic sugar.

Accumulator factory in Haskell

Now, at the start of my adventure with programming, I have some problems understanding basic concepts. Here is one related to Haskell, or perhaps to the functional paradigm in general.
Here is a general statement of accumulator factory problem, from
http://rosettacode.org/wiki/Accumulator_factory
[Write a function that]
Takes a number n and returns a function (let's call it g), that takes a number i, and returns n incremented by the accumulation of i from every call of function g(i).
Works for any numeric type-- i.e. can take both ints and floats and returns functions that can take both ints and floats. (It is not enough simply to convert all input to floats. An accumulator that has only seen integers must return integers.) (i.e., if the language doesn't allow for numeric polymorphism, you have to use overloading or something like that)
Generates functions that return the sum of every number ever passed to them, not just the most recent. (This requires a piece of state to hold the accumulated value, which in turn means that pure functional languages can't be used for this task.)
Returns a real function, meaning something that you can use wherever you could use a function you had defined in the ordinary way in the text of your program. (Follow your language's conventions here.)
Doesn't store the accumulated value or the returned functions in a way that could cause them to be inadvertently modified by other code. (No global variables or other such things.)
with, as I understand, a key point being:
"[...] creating a function that [...]
Generates functions that return the sum of every number ever passed to them, not just the most recent. (This requires a piece of state to hold the accumulated value, which in turn means that pure functional languages can't be used for this task.)"
We can find a Haskell solution on the same website and it seems to do just what the quote above says.
Here
http://rosettacode.org/wiki/Category:Haskell
it is said that Haskell is purely functional.
What is then the explanation of the apparent contradiction? Or maybe there is no contradiction and I simply lack some understanding? Thanks.
The Haskell solution does not actually quite follow the rules of the challenge. In particular, it violates the rule that the function "Returns a real function, meaning something that you can use wherever you could use a function you had defined in the ordinary way in the text of your program." Instead of returning a real function, it returns an ST computation that produces a function that itself produces more ST computations. Within the context of an ST "state thread", you can create and use mutable references (STRef), arrays, and vectors. However, it's impossible for this mutable state to "leak" outside the state thread to contaminate pure code.
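For reference, here is a minimal sketch in the same spirit as the Rosetta solution (not the exact code from the site), with the accumulated value kept in an STRef:

import Control.Monad.ST
import Data.STRef

-- The factory returns an ST action producing the accumulating "function";
-- note that what it produces has type a -> ST s a, not a plain a -> a.
accumulator :: Num a => a -> ST s (a -> ST s a)
accumulator n = do
    ref <- newSTRef n
    return $ \i -> do
        modifySTRef ref (+ i)
        readSTRef ref

demo :: Int
demo = runST calc
  where
    calc = do
        acc <- accumulator 1
        _   <- acc 5
        acc 3            -- 1 + 5 + 3

main :: IO ()
main = print demo        -- 9

Because acc has type a -> ST s a rather than a -> a, the mutation stays inside the state thread, which is exactly the point made above.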

Call by need: When is it used in Haskell?

http://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_need says:
"Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. [...] Haskell is the most well-known language that uses call-by-need evaluation."
However, the value of a computation is not always stored for faster access (for example, consider a recursive definition of Fibonacci numbers). I asked someone on #haskell and the answer was that this memoization is done automatically "only in one instance, e.g. if you have `let foo = bar baz`, foo will be evaluated once".
My question is: what exactly does "instance" mean here, and are there cases other than let in which memoization is done automatically?
Describing this behavior as "memoization" is misleading. "Call by need" just means that a given input to a function will be evaluated somewhere between 0 and 1 times, never more than once. (It could be partially evaluated as well, which means the function only needed part of that input.) In contrast, "call by name" is simply expression substitution, which means if you give the expression 2 + 3 as an input to a function, it may be evaluated multiple times if the input is used more than once. Both call by need and call by name are non-strict: if the input is not used, then it is never evaluated.

Most programming languages are strict, and use a "call by value" approach, which means that all inputs are evaluated before you begin evaluating the function, whether or not the inputs are used. This all has nothing to do with let expressions.
Haskell does not perform any automatic memoization. Let expressions are not an example of memoization. However, most compilers will evaluate let bindings in a call-by-need-esque fashion. If you model a let expression as a function, then the "call by need" mentality does apply:
let foo = expression one in expression two that uses foo
==>
(\foo -> expression two that uses foo) (expression one)
This doesn't correctly model recursive bindings, but you get the idea.
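A quick way to see the sharing you get from let is Debug.Trace (a sketch; trace prints its message at the moment the thunk is actually forced):

import Debug.Trace (trace)

main :: IO ()
main = do
    let foo = trace "evaluating foo" (2 + 3 :: Int)
    print (foo + foo)    -- "evaluating foo" is printed once, then 10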
The Haskell language definition does not define when, or how often, code is invoked. Infinite loops are defined in terms of 'the bottom' (written ⊥), a value (existing within all types) that represents an error condition. The compiler is free to make its own decisions regarding when and how often to evaluate things, as long as the program (and the presence/absence of error conditions, including infinite loops!) behaves according to spec.
That said, the usual way of doing this is that most expressions generate 'thunks' - basically a pointer to some code and some context data. The first time you attempt to examine the result of the expression (ie, pattern match it), the thunk is 'forced'; the pointed-to code is executed, and the thunk overwritten with real data. This in turn can recursively evaluate other thunks.
Of course, doing this all the time is slow, so the compiler usually tries to analyze when you'd end up forcing a thunk right away anyway (ie, when something is 'strict' on the value in question), and if it finds this, it'll skip the whole thunk thing and just call the code right away. If it can't prove this, it can still make this optimization as long as it makes sure that executing the thunk right away can't crash or cause an infinite loop (or it handles these conditions somehow).
If you don't want to have to get very technical about this, the essential point is that when you have an expression like some_expensive_computation of all these arguments, you can do whatever you want with it; store it in a data structure, create a list of 53 copies of it, pass it to 6 other functions, etc, and then even return it to your caller for the caller to do whatever it wants with it.
What Haskell will (mostly) do is evaluate it at most once; if the program ever needs to know something about what that expression returned in order to make a decision, then it will be evaluated (at least enough to know which way the decision should go). That evaluation will affect all the other references to the same expression, even if they are now scattered around in data structures and other not-yet-evaluated expressions all throughout your program.
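The same sharing applies when the references end up scattered through data structures, as described above. A hedged sketch, again using Debug.Trace to show when the single underlying thunk is forced:

import Debug.Trace (trace)

expensive :: Int
expensive = trace "computing" (sum [1 .. 1000000])

main :: IO ()
main = do
    let copies = replicate 3 expensive   -- three references to one thunk
    print (head copies)                  -- forces it: "computing" appears here
    print (sum copies)                   -- already evaluated, nothing more is traced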

Access the configuration parameters through a monad?

Quote from here: http://www.haskell.org/haskellwiki/Global_variables
If you have a global environment, which various functions read from (and you might, for example, initialise from a configuration file) then you should thread that as a parameter to your functions (after having, very likely, set it up in your 'main' action). If the explicit parameter passing annoys you, then you can 'hide' it with a Monad.
Now I'm writing something that needs access to configuration parameters and I wonder if someone could point me to a tutorial or any other resource that describes how monads can be used for this purpose. Sorry if this question is stupid, I'm just starting to grok monads. Reading Mike Vainer's tutorial on them now.
The basic idea is that you write code like this:
main = do
    parameters <- readConfigurationParametersSomehow
    forever $ do
        myData <- readUserInput
        putStrLn $ bigComplicatedFunction myData parameters

bigComplicatedFunction d params = someFunction params x y z
    where x = function1 params d
          y = function2 params x d
          z = function3 params y
You read the parameters in the "main" function with an IO action, and then pass those parameters to your worker function(s) as an extra argument.
The trouble with this style is that the parameter block has to be passed down to every little function that needs to access it. This is a nuisance. You find that some function ten levels down in the call tree now needs some run-time parameter, and you have to add that run-time parameter as an argument to all the functions in between. This is known as tramp data.
The monad "solution" is to embed the run-time parameter in the Reader Monad, and make all your functions into monadic actions. This gets rid of the explicit tramp data parameter, but replaces it with a monadic type, and under the hood this monad is actually doing the data tramping for you.
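Here is a minimal sketch of the Reader approach, assuming the mtl package (Config, greeting and port are made-up stand-ins for your real configuration):

import Control.Monad.Reader

data Config = Config { greeting :: String, port :: Int }

describe :: Reader Config String
describe = do
    g <- asks greeting
    p <- asks port
    return (g ++ ", listening on port " ++ show p)

main :: IO ()
main = do
    let cfg = Config "hello" 8080      -- in real code, read this from a file in main
    putStrLn (runReader describe cfg)

Every function that needs the configuration gets a Reader Config type, and runReader supplies the value once at the top, so the data tramping happens inside the monad instead of in your argument lists.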
The imperative world solves this problem with a global variable. In Haskell you can sort-of do the same thing like this:
parameters = unsafePerformIO readConfigurationParametersSomehow
The first time you use "parameters" the "readConfigurationParametersSomehow" gets executed, and from then on it behaves like a constant value, at least as long as your program is running. This is one of the few righteous uses for unsafePerformIO.
However, if you find yourself needing such a solution, then you really need to have a think about your design. Odds are you are not thinking hard enough about generalising your functions lower down; if some previously pure function suddenly needs a run-time parameter, then look at the reason and see if you can exploit higher-order functions in some way. For instance:
Pass down a function built using the parameter rather than the parameter itself.
Have the worker function at the bottom return a function as a result, which gets passed up to be composed with a parameter-based function at the higher level.
Refactor your call stack so that fundamental operations are done by lower-level primitives at the bottom, which are composed in a parameter-dependent way at the top.
Either way is going to involve

Ordering of parameters to make use of currying

I have twice recently refactored code in order to change the order of parameters because there was too much code where hacks like flip or \x -> foo bar x 42 were happening.
When designing a function signature what principles will help me to make the best use of currying?
For languages that support currying and partial-application easily, there is one compelling series of arguments, originally from Chris Okasaki:
Put the data structure as the last argument
Why? You can then compose operations on the data nicely. E.g. insert 1 $ insert 2 $ insert 3 $ s (see the Data.Set sketch below). This also helps for functions on state.
Standard libraries such as "containers" follow this convention.
Alternate arguments are sometimes given to put the data structure first, so it can be closed over, yielding functions on a static structure (e.g. lookup) that are a bit more concise. However, the broad consensus seems to be that this is less of a win, especially since it pushes you towards heavily parenthesized code.
Put the most varying argument last
For recursive functions, it is common to put the argument that varies the most (e.g. an accumulator) last, and the argument that varies the least (e.g. a function argument) first. This composes well with the data-structure-last style.
A summary of the Okasaki view is given in his Edison library (again, another data structure library):
Partial application: arguments more likely to be static usually appear before other arguments in order to facilitate partial application.
Collection appears last: in all cases where an operation queries a single collection or modifies an existing collection, the collection argument will appear last. This is something of a de facto standard for Haskell datastructure libraries and lends a degree of consistency to the API.
Most usual order: where an operation represents a well-known mathematical function on more than one datastructure, the arguments are chosen to match the most usual argument order for the function.
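The collection-last convention in action, as a small sketch with Data.Set from containers (mirroring the insert example above):

import qualified Data.Set as Set

s :: Set.Set Int
s = Set.empty

example :: Set.Set Int
example = Set.insert 1 $ Set.insert 2 $ Set.insert 3 $ s
-- insert :: Ord a => a -> Set a -> Set a puts the collection last,
-- so the partially applied inserts chain together naturally.

main :: IO ()
main = print (Set.toList example)   -- [1,2,3]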
Place the arguments that you are most likely to reuse first. Function arguments are a great example of this. You are much more likely to want to map f over two different lists, than you are to want to map many different functions over the same list.
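As a sketch of this point: because map takes the function first, reusing one function over several lists is just partial application:

double :: [Int] -> [Int]
double = map (* 2)      -- the list argument is left open

main :: IO ()
main = mapM_ print [double [1, 2, 3], double [4, 5, 6]]   -- [2,4,6] and [8,10,12]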
I tend to do what you did: pick some order that seems good, and then refactor if it turns out that another order is better. The order depends a lot on how you are going to use the function (naturally).
