How do I get Mathematica to thread a 2-variable function over two lists, using functional programming techniques?

Let's say I have a function f[x_, y_] and two lists l1, l2. I'd like to evaluate f[x, y] for each pair x, y with x in l1 and y in l2, and I'd like to do it without having to build all pairs of the form {l1[[i]], l2[[j]]} myself.
Essentially, what I want is something like Map[Map[f[#1, #2]&, l1],l2] where #1 takes values from l1 and #2 takes values from l2, but this doesn't work.
(Motivation: I'm trying to implement some basic Haskell programs in Mathematica. In particular, I'd like to be able to code the Haskell program
isMatroid :: [[Int]] -> Bool
isMatroid b = and [or [sort (union (xs \\ [x]) [y]) `elem` b | y <- ys] | xs <- b, ys <- b, x <- xs]
I think I can do the rest of it, if I can figure out the original question, but I'd like the code to be Haskell-like. Any suggestions for implementing Haskell-like code in Mathematica would be appreciated.)

To evaluate a function f over all pairs from two lists l1 and l2, use Outer:
In[1]:= Outer[f, {a,b}, {x,y,z}]
Out[1]= {{f[a,x],f[a,y],f[a,z]}, {f[b,x],f[b,y],f[b,z]}}
Outer by default works at the lowest level of the provided lists; you can also specify a level with an additional argument:
In[2]:= Outer[f, {{1, 2}, {3, 4}}, {{a, b}, {c, d}}, 1]
Out[2]= {{f[{1,2},{a,b}], f[{1,2},{c,d}]}, {f[{3,4},{a,b}], f[{3,4},{c,d}]}}
Note that this produces a nested list; you can Flatten it if you like.
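For example, flattening one level turns the nested result into a flat list of applications (continuing the first example above):
In[3]:= Flatten[Outer[f, {a,b}, {x,y,z}], 1]
Out[3]= {f[a,x], f[a,y], f[a,z], f[b,x], f[b,y], f[b,z]}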
My original answer pointed to Thread and MapThread, which are two ways to apply a function to corresponding pairs from lists, e.g. MapThread[f, {{a, b}, {1, 2}}] == {f[a, 1], f[b, 2]}.
P.S. I think as you're learning these things, you'll find the documentation very helpful. There are a lot of general topic pages, for example on applying functions to lists and on list manipulation. These are generally linked in the "more about" section at the bottom of the documentation for specific functions, which makes it a lot easier to find things when you don't know what they'll be called.

To pick up on the OP's request for suggestions about implementing Haskell-like code in Mathematica, a couple of things you'll have to deal with are:
Haskell evaluates lazily; Mathematica, by default, does not: it's very eager. You'll need to wrestle with Hold[] and its relatives to write lazily evaluating functions, but it can be done (a minimal sketch follows after this list). You can also subvert Mathematica's evaluation process and tinker with Prolog and Epilog and the like.
Haskell's type system and type checking are probably more rigorous than Mathematica's defaults, but Mathematica does have the features to implement strict type checking.
I'm sure there's a lot more but I'm not terribly familiar with Haskell.
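As a minimal illustration of the eagerness point, here is a sketch using Hold and ReleaseHold (both real built-ins; the Print example is ours):
In[1]:= expr = Hold[Print["side effect"]];  (* nothing is printed yet *)
In[2]:= ReleaseHold[expr]  (* only now does the Print fire *)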

In[1]:= list1 = Range[1, 5];
In[2]:= list2 = Range[6, 10];
In[3]:= (f ## #) & /# Transpose[{list1, list2}]
Out[3]= {f[1, 6], f[2, 7], f[3, 8], f[4, 9], f[5, 10]}

How would I know if Python creates a new sublist in memory for: `for item in nums[1:]`

I'm not asking for an answer to the question, but rather how I, on my own, could have gotten the answer.
Original Question:
Does the following code cause Python to make a new list of size (len(nums) - 1) in memory that then gets iterated over?
for item in nums[1:]:
    # do stuff with item
Original Answer
A similarish question is asked here and there is a subcomment by Srinivas Reddy Thatiparthy saying that a new sublist is created.
But, there is no detail given about how he arrived at this answer, which I think makes it very different from what I'm looking for.
Question:
How could I have figured out on my own what the answer to my question is?
I've had similar questions before. For instance, I learned that if I do my_function(nums[1:]), I don't pass in a "slice" but rather a completely new, different sublist! I found this out by just testing whether the original list passed into my_function was modified post-function (it wasn't).
But I don't see an immediate way to figure out if Python is making a new sublist for the for loop example. Please help me to know how to do this.
side note
By the way, this is the current solution I'm using from the original stackoverflow post solutions:
for indx, item in enumerate(nums):
    if indx == 0:
        continue
    # do stuff w items
In general, the easy way to learn whether you have a new chunk of data or just a new reference to an existing chunk is to modify the data through one reference, and then see if it is also modified through the other. (It sounds like that's what you did with my_function, and I would recommend it as a general technique.) Some pseudocode would look like:
function areSameRef(thing1, thing2){
    thing1.modify()
    return thing1.equals(thing2) //make sure this is not just a referential equality check
}
It is very rare that this will fail, and essentially requires behind-the-scenes optimizations where data isn't cloned immediately but only when modified. In this case the fact that the underlying data is the same is being hidden from you, and in most cases, you should just trust that whoever did the hiding knows what they're doing. Exceptions are if they did it wrong, or if you have some complex performance issues. For that you may need to turn to more language-specific debugging or profiling tools. (See below for more)
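In Python specifically, that test only takes a few lines; here is a sketch of the technique (our example, not from the original answer):
nums = [1, 2, 3]
alias = nums     # a second reference to the same list object
part = nums[1:]  # the slice under investigation
alias[0] = 99
assert nums[0] == 99  # the change is visible: alias and nums are the same object
part[0] = 77
assert nums[1] == 2   # the change is not visible: the slice is a separate list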
Do also be careful about cases where part of the data may be shared - for instance, look up cons lists and tail sharing. In those cases, something like:
function foo(list1, list2){
    list1.append(someElement)
    return list1.length == list2.length
}
will return false (the element is only added to the first list), but something like:
function bar(list1, list2){
    list1.set(someIndex, someElement)
    return list1.get(someIndex) == list2.get(someIndex)
}
will return true (though in practice, lists created that way usually don't have an interface that allows mutability).
I don't see a question in part 2, but yes, your conclusion looks valid to me.
EDIT: More on actual memory usage
As you pointed out, there are situations where that sort of test won't work because you don't actually have two references, as in the for item in nums[1:] case. In that case I would say turn to a profiler, but you couldn't really trust the results.
The reason for that comes down to how compilers/interpreters work, and the contract they fulfill in the language specification. The general rule is that the interpreter is allowed to re-arrange and modify the execution of your code in any way that does not change the results, but may change the memory or time performance. So, if the state of your code and all the I/O are the same, it should not be possible for foo(5) to return 6 in one interpreter implementation/execution and 7 in another, but it is valid for them to take very different amounts of time and memory.
This matters because a lot of what interpreters and compilers do is behind-the-scenes optimizations; they will try to make your code run as fast as possible and with as small a memory footprint as possible, so long as the results are the same. However, it can only do so when it can prove that the changes will not modify the results.
This means that if you write a simple test case, the interpreter may optimize it behind the scenes to minimize the memory usage and give you one result - "no new list is created." But if you try to trust that result in real code, the real code may be too complex for the compiler to tell whether the optimization is safe, and the optimization may not happen. It can also depend on the specific interpreter version, environment variables, or available hardware resources.
Here's an example:
def foo(x: int):
    l = range(9999)
    return 5

def bar(x: int):
    l = range(9999)
    if (x + 1 != (x*2+2)/2):
        return l[x]
    else:
        return 5
I can't promise this for any particular language, but I would usually expect foo and bar to have very different memory usage. In foo, any moderately well-made interpreter should be able to tell that l is never referenced before it goes out of scope, and thus can safely skip allocating any memory at all. In bar (unless I failed at arithmetic), l will never be used either - but knowing that requires some reasoning about the condition of the if statement. It takes a much smarter interpreter to recognize that, so even though these two code snippets might look the same logically, they can have very different behind-the-scenes performance.
EDIT: As has been pointed out to me, Python specifically may not be able to optimize either of these, given the dynamic nature of the language; the range function and the list type may both have been re-assigned or altered from elsewhere in the code. Without specific expertise in the Python optimization world I can't say what they do or don't do. Anyway, I'm leaving this here for edification on the general concept of optimizations, but take my error as a case lesson in "reasoning about optimization is hard".
All of that being said: FWIW, I strongly suspect that the python interpreter is smart enough to recognize that for i in nums[1:] should not actually allocate new memory, but just iterate over a slice. That looks to my eyes to be a relatively simple, safe and valuable transformation on a very common use case, so I would expect the (highly optimized) python interpreter to handle it.
EDIT2: As a final (opinionated) note, I'm less confident about that in Python than I am in almost any other language, because Python syntax is so flexible and allows so many strange things. This makes it much more difficult for the python interpreter (or a human, for that matter) to say anything with confidence, because the space of "legal python code" is so large. This is a big part of why I prefer much stricter languages like Rust, which force the programmer to color inside the lines but result in much more predictable behaviors.
EDIT3: As a post-final note, usually for things like this it's best to trust that the execution environment is handling these sorts of low-level optimizations. Nine times out of ten, don't try to solve this kind of performance problem until something actually breaks.
As for knowing how list slice works, from the language reference Sequence Types — list, tuple, range, we know that
s[i:j] - The slice of s from i to j is defined as the sequence of items with index k such that i <= k < j.
So, the slice creates a new sequence but we don't know whether that sequence is a list or whether there is some clever way that the same list object somehow represents both of these sequences. That's not too surprising with the python language spec where lists are described as part of the general discussion of sequences and the spec never really tries to cover all of the details for object implementation.
That's because in the end, something like nums[1:] is really just syntactic sugar for nums.__getitem__(slice(1, None)), meaning that lists get to decide for themselves what slicing means. And you need to go to the source for the implementation. See the list_subscript function in listobject.c.
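A quick way to convince yourself of that equivalence (our example, not from the original answer):
nums = [1, 2, 3]
assert nums[1:] == nums.__getitem__(slice(1, None))  # both produce [2, 3]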
But we can experiment. Looking at the documentation for The for statement,
for_stmt ::= "for" target_list "in" starred_list ":" suite
["else" ":" suite]
The starred_list expression is evaluated once; it should yield an iterable object.
So, nums[1:] is an expression that must yield an iterable object and we can assign that object to an intermediate variable.
nums = [1, 2, 3]
tmp = nums[1:]
for item in tmp:
    pass
tmp[0] = 999
assert id(nums) != id(tmp), "List slice creates a new object"
assert type(tmp) == type(nums), "List slice creates a new list"
assert 999 not in nums, "List slice doesn't affect original"
Run that, and if no assertion error is raised, you know that a new list was created.
Other sequence-like objects may work radically differently. With a numpy array, for instance, two array objects may indeed reference the same memory. In the following example, the final assertion fails because the slice is just another view into the same array. Yes, this can keep you up all night.
import numpy as np
nums = np.array([1,2,3])
tmp = nums[1:]
for item in tmp:
    pass
tmp[0] = 999
assert id(nums) != id(tmp), "array slice creates a new object"
assert type(tmp) == type(nums), "array slice creates a new list"
assert 999 not in nums, "array slice doesn't affect original"
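You can also ask NumPy directly: np.shares_memory reports whether two arrays overlap in memory (the example is ours):
import numpy as np
nums = np.array([1, 2, 3])
tmp = nums[1:]
print(np.shares_memory(nums, tmp))          # True: the slice is a view of the same buffer
print(np.shares_memory(nums, nums.copy()))  # False: an explicit copy gets fresh memory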
You can use the new Walrus operator := to capture the temporary object created by Python for the slice. A little investigation demonstrates that they aren't the same object.
import sys
print(sys.version)
a = list(range(1000))
for i in (b := a[1:]):
    b[0] = 906
print(b is a)
print(a[:10])
print(b[:10])
print(sys.getsizeof(a))
print(sys.getsizeof(b))
Generates the following output:
3.11.0 (main, Nov 4 2022, 00:14:47) [GCC 7.5.0]
False
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[906, 2, 3, 4, 5, 6, 7, 8, 9, 10]
8056
8048
See for yourself on the Godbolt Compiler Explorer, where you can also inspect the compiler-generated code.

Sympy: can I get an explicit print of a builtin function?

What I'm trying to do is basically ask sympy to tell me exactly what mathematical formula is behind some of its builtin functions, since I have been struggling quite a lot with naming conventions, and I'd like to check that I'm using the function I'm supposed to.
My case would be for example with the builtin Bessel functions jn: according to Wikipedia, j0(x)=sin(x)/x, and I'm inclined to agree having solved that simple-case differential equation by hand.
However, when I ask sympy if jn(0, x)-sin(x)/x==0 the query returns False.
Running simplify does nothing but take up time, and I wouldn't know what else to do since factor and expand at most move the x to be a common denominator and evalf gives back the function in terms of yet another builtin function.
Since higher-order functions tend to be a lot more complicated than this, it would be really useful to find out which builtins correspond to what I'm looking for.
I'll be grateful for any link to resources that might help.
Using python3.10.6, sympy 1.9, running on Kubuntu 22.04
You can use the built-in help() function to read the documentation associated to a given SymPy object, or you can explore the documentation.
By reading the documentation with help(jn), we can find the description and a few examples, one of which points to what you'd like to achieve:
from sympy import jn, expand_func, symbols  # imports added for a runnable snippet
x = symbols('x')
print(expand_func(jn(0, x)))
# out: sin(x)/x
Edit: looking back at your question, you used: jn(0, x)-sin(x)/x==0.
With SymPy there are two kinds of equality testing:
Structural equality, performed with the == operator: you are asking SymPy to compare the expression trees. Clearly, jn(0, x) - sin(x)/x is structurally different than 0, because at this stage jn(0, x) has not been "expanded".
Mathematical equality: generally speaking, two expressions are mathematically equivalent if expr1 - expr2 is equal to 0. So, with the above example we can write expand_func(jn(0, x)) - sin(x)/x, which results in 0. Or we can use the equals() method: expand_func(jn(0, x)).equals(sin(x) / x), which returns True. Please read the documentation about equals to better understand its quirks.
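Putting the two kinds of equality side by side (a small sketch; jn, sin, symbols and expand_func should all be importable from top-level sympy):
from sympy import jn, sin, symbols, expand_func
x = symbols('x')
print(jn(0, x) - sin(x)/x == 0)                # False: structural comparison of expression trees
print(expand_func(jn(0, x)) - sin(x)/x)        # 0: the expanded form cancels exactly
print(expand_func(jn(0, x)).equals(sin(x)/x))  # True: mathematical equality test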

Can I treat int and float the same in my Cucumber Steps?

I'm trying to write tests for a Tuple class that I'm writing in Ruby (this is an exercise to learn both Ruby and Gherkin). So one of my Scenarios creates a Tuple with float values:
Scenario: A tuple with w=1.0 is a point
Given a ← tuple[4.3, -4.2, 3.1, 1.0]
Then a.x = 4.3
And ...
For the Given step, cucumber suggests the following:
Given("a ← tuple[{float}, {float}, {float}, {float}]") do |float, float2, float3, float4|
  pending # Write code here that turns the phrase above into concrete actions
end
which I implemented as:
Given("a ← tuple[{float}, {float}, {float}, {float}]") do |float, float2, float3, float4|
  tuple_a = Tuple.new(float, float2, float3, float4)
end
Great. Now I want another scenario which happens to pass integers to the Tuple:
Scenario: Adding two tuples
Given a ← tuple[3, -2, 5, 1]
...
And Cucumber suggests:
Given("a ← tuple[{int}, {int}, {int}, {int}]") do |int, int2, int3, int4|
  pending # Write code here that turns the phrase above into concrete actions
end
But my implementation is in Ruby; I don't really care if I'm passing ints or floats to Tuple.new(). The Given step I implemented first, which expects floats, would work just as well for ints, but Cucumber won't use it; it wants me to implement the step again with int params. I could just use float arguments, e.g. Given a ← tuple[3.0, -2.0, 5.0, 1.0], but that's kind of annoying. Is my only option to define a custom ParameterType? That would entail a regexp that matches both integers and floats; will that take priority over the existing int and float types?
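For concreteness, here is a sketch of what such a custom parameter type might look like in cucumber-ruby (the name number, the regexp, and the coercion to Float are our illustration, not a tested recipe):
ParameterType(
  name:        'number',
  regexp:      /-?\d+(?:\.\d+)?/,
  transformer: ->(s) { s.to_f }
)

Given("a ← tuple[{number}, {number}, {number}, {number}]") do |x, y, z, w|
  @tuple_a = Tuple.new(x, y, z, w)
end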
I would suggest using unit test tools for this sort of thing, e.g. rspec, minitest, etc.; they are a much better fit.
Scenarios are useful when you can express things in a language that is not technical and abstract. Your scenarios are technical and concrete and much harder to read and write.
An analogy is trying to write mathematical expressions in a natural language:
(3+5)^3 is much simpler and more precise than "add 5 to 3 and then cube the total".
The art of learning Gherkin is writing simple, clear scenarios that describe a particular behaviour. It is not about using multiple params, complex regexes, large tables and multiple examples. You are learning the wrong things if you want to learn how to Cuke and do BDD, and you are using the wrong tool if you want to learn Ruby and write tests for things like a Tuple class.

Is functional Clojure or imperative Groovy more readable?

OK, no cheating now.
No, really, take a minute or two and try this out.
What does "positions" do?
Edit: simplified according to cgrand's suggestion.
(defn redux [[current next] flag] [(if flag current next) (inc next)])
(defn positions [coll]
  (map first (reductions redux [1 2] (map = coll (rest coll)))))
Now, how about this version?
def positions(coll) {
    def (current, next) = [1, 1]
    def previous = coll[0]
    coll.collect {
        current = (it == previous) ? current : next
        next++
        previous = it
        current
    }
}
I'm learning Clojure and I'm loving it, because I've always enjoyed functional programming. It took me longer to come up with the Clojure solution, but I enjoyed having to think of an elegant solution. The Groovy solution is alright, but I'm at the point where I find this type of imperative programming boring and mechanical. After 12 years of Java, I feel in a rut and functional programming with Clojure is the boost I needed.
Right, get to the point. Well, I have to be honest and say that I wonder if I'll understand the Clojure code when I go back to it months later. Sure I could comment the heck out of it, but I don't need to comment my Java code to understand it.
So my question is: is it a question of getting more used to functional programming patterns? Are functional programming gurus reading this code and finding it a breeze to understand? Which version did you find easier to understand?
Edit: what this code does is calculate the positions of players according to their points, while keeping track of those who are tied. For example:
Pos Points
1. 36
1. 36
1. 36
4. 34
5. 32
5. 32
5. 32
8. 30
I don't think there's any such thing as intrinsic readability. There's what you're used to, and what you aren't used to. I was able to read both versions of your code OK. I could actually read your Groovy version more easily, even though I don't know Groovy, because I too spent a decade looking at C and Java and only a year looking at Clojure. That doesn't say anything about the languages, it only says something about me.
Similarly I can read English more easily than Spanish, but that doesn't say anything about the intrinsic readability of those languages either. (Spanish is actually probably the "more readable" language of the two in terms of simplicity and consistency, but I still can't read it). I'm learning Japanese right now and having a heck of a hard time, but native Japanese speakers say the same about English.
If you spent most of your life reading Java, of course things that look like Java will be easier to read than things that don't. Until you've spent as much time looking at Lispy languages as looking at C-like languages, this will probably remain true.
To understand a language, among other things you have to be familiar with:
syntax ([vector] vs. (list), hyphens-in-names)
vocabulary (what does reductions mean? How/where can you look it up?)
evaluation rules (does treating functions as objects work? It's an error in most languages.)
idioms, like (map first (some set of reductions with extra accumulated values))
All of these take time and practice and repetition to learn and internalize. But if you spend the next 6 months reading and writing lots of Clojure, not only will you be able to understand that Clojure code 6 months from now, you'll probably understand it better than you do now, and maybe even be able to simplify it. How about this:
(use 'clojure.contrib.seq-utils)

(defn positions [coll]
  (mapcat #(repeat (count %) (inc (ffirst %)))
          (partition-by second (indexed coll))))
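For instance, with the scores from the question this gives:
(positions [36 36 36 34 32 32 32 30])
;; => (1 1 1 4 5 5 5 8)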
Looking at Clojure code I wrote a year ago, I'm horrified at how bad it is, but I can read it OK. (Not saying your Clojure code is horrible; I had no trouble reading it at all, and I'm no guru.)
I agree with Timothy: you introduce too many abstractions. I reworked your code and ended up with:
(defn positions [coll]
  (reductions (fn [[_ prev-score :as prev] [_ score :as curr]]
                (if (= prev-score score) prev curr))
              (map vector (iterate inc 1) coll)))
About your code,
(defn use-prev [[a b]] (= a b))
(defn pairs [coll] (partition 2 1 coll))
(map use-prev (pairs coll))
can be simply refactored as:
(map = coll (rest coll))
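To see what that expression produces, take the tied scores from the question:
(map = [36 36 36 34] (rest [36 36 36 34]))
;; => (true true false)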
edit: may not be relevant anymore.
The Clojure one is convoluted to me. It contains more abstractions which need to be understood. This is the price of using higher-order functions: you have to know what they mean. So in an isolated case, imperative requires less knowledge. But the power of abstractions lies in their means of combination. Every imperative loop must be read and understood, whereas sequence abstractions let you remove the complexity of a loop and combine powerful operations.
I would further argue that the Groovy version is at least partially functional as it uses collect, which is really map, a higher order function. It has some state in it also.
Here is how I would write the Clojure version:
(defn positions2 [coll]
  (let [current (atom 1)
        if-same #(if (= %1 %2) @current (reset! current (inc %3)))]
    (map if-same (cons (first coll) coll) coll (range (count coll)))))
This is quite similar to the Groovy version in that it uses a mutable "current", but differs in that it doesn't have a next/prev variable, instead using immutable sequences for those. As Brian eloquently put it, readability is not intrinsic. This version is my preference for this particular case, and seems to sit somewhere in the middle.
The Clojure one is more convoluted at first glance, though it may be more elegant.
OO is an attempt to make languages more "relatable" at a higher level.
Functional languages seem to have a more "algorithmic" (primitive/elementary) feel to them.
That's just what I felt at the moment.
Maybe that will change when I have more experience working with Clojure.
I'm afraid that we are descending into the game of which language can be the most concise or solve a problem in the fewest lines of code.
The issue is twofold for me:
How easy is it, at first glance, to get a feel for what the code is doing?
This is important for code maintainers.
How easy is it to guess at the logic behind the code?
Is it too verbose and long-winded? Too terse?
"Make everything as simple as possible, but not simpler."
Albert Einstein
I too am learning Clojure and loving it. But at this stage of my development, the Groovy version was easier to understand. What I like about Clojure though is reading the code and having the "Aha!" experience when you finally "get" what is going on. What I really enjoy is the similar experience that happens a few minutes later when you realize all of the ways the code could be applied to other types of data with no changes to the code. I've lost count of the number of times I've worked through some numerical code in Clojure and then, a little while later, thought of how that same code could be used with strings, symbols, widgets, ...
The analogy I use is about learning colors. Remember when you were introduced to the color red? You understood it pretty quickly -- there's all this red stuff in the world. Then you heard the term magenta and were lost for a while. But again, after a little more exposure, you understood the concept and had a much more specific way to describe a particular color. You have to internalize the concept, hold a bit more information in your head, but you end up with something more powerful and concise.
Groovy supports various styles of solving this problem too:
coll.groupBy{it}.inject([]){ c, n -> c + [c.size() + 1] * n.value.size() }
definitely not refactored to be pretty but not too hard to understand.
I know this is not an answer to the question, but I will be able "understand" the code much better if there are tests, such as:
assert positions([1]) == [1]
assert positions([2, 1]) == [1, 2]
assert positions([2, 2, 1]) == [1, 1, 3]
assert positions([3, 2, 1]) == [1, 2, 3]
assert positions([2, 2, 2, 1]) == [1, 1, 1, 4]
That will tell me, one year from now, what the code is expected to do. Much better than any excellent version of the code I have seen here.
Am I really off topic?
The other thing is, I think "readability" depends on the context. It depends who will maintain the code. For example, in order to maintain the "functional" version of the Groovy code (however brilliant), it will take not only Groovy programmers, but functional Groovy programmers...
The other, more relevant, example is: if a few lines of code make it easier to understand for "beginner" Clojure programmers, then the code will overall be more readable because it will be understood by a larger community: no need to have studied Clojure for three years to be able to grasp the code and make edits to it.

Explanation of “tying the knot”

In reading Haskell-related stuff I sometimes come across the expression “tying the knot”. I think I understand what it does, but not how.
So, are there any good, basic, and simple to understand explanations of this concept?
Tying the knot is a solution to the problem of circular data structures. In imperative languages you construct a circular structure by first creating a non-circular structure, and then going back and fixing up the pointers to add the circularity.
Say you wanted a two-element circular list with the elements "0" and "1". It would seem impossible to construct because if you create the "1" node and then create the "0" node to point at it, you cannot then go back and fix up the "1" node to point back at the "0" node. So you have a chicken-and-egg situation where both nodes need to exist before either can be created.
Here is how you do it in Haskell. Consider the following value:
alternates = x where
    x = 0 : y
    y = 1 : x
In a non-lazy language this will be an infinite loop because of the unterminated recursion. But in Haskell lazy evaluation does the Right Thing: it generates a two-element circular list.
To see how it works in practice, think about what happens at run-time. The usual "thunk" implementation of lazy evaluation represents an unevaluated expression as a data structure containing a function pointer plus the arguments to be passed to the function. When this is evaluated the thunk is replaced by the actual value so that future references don't have to call the function again.
When you take the first element of the list, 'x' is evaluated down to a value (0, &y), where the "&y" bit is a pointer to the value of 'y'. Since 'y' has not been evaluated yet, this is currently a thunk. When you take the second element of the list, the computer follows the link from x to this thunk and evaluates it. It evaluates to (1, &x), or in other words a pointer back to the original 'x' value. So you now have a circular list sitting in memory. The programmer doesn't need to fix up the back-pointers because the lazy evaluation mechanism does it for you.
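You can watch the circular list at work by taking a finite prefix of it (the take here is our illustration):
ghci> take 6 alternates
[0,1,0,1,0,1]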
It's not quite what you asked for, and it's not directly related to Haskell, but Bruce McAdam's paper That About Wraps It Up goes into this topic in substantial breadth and depth. Bruce's basic idea is to use an explicit knot-tying operator called WRAP instead of the implicit knot-tying that is done automatically in Haskell, OCaml, and some other languages. The paper has lots of entertaining examples, and if you are interested in knot-tying I think you will come away with a much better feel for the process.
