Haskell: binding to a fast and simple SAT solver

Today I wanted to look into options for SAT solving in Haskell. First I thought about writing my own interface to the PicoSAT solver.
Then I found out there is the SBV library.
It has interfaces to Z3, Yices, CVC4, and Boolector.
Also, I did a Google search on GitHub, and it turns out there is even a PicoSAT binding available.
Are there any other Haskell bindings to SAT solvers that are worth looking at, given the constraint of fast/high performance? Clarification: bindings that are suitable for high-performance SAT solving (e.g., problems that run for days, as well as problems that need to finish as fast as possible because I check 2^20 or more SAT problems). For example, what I am particularly missing on Hackage is a binding to a fast parallel SAT solver like Plingeling. (Also, I found the current, updated picosat binding on GitHub more by accident, and I may very well be missing other options.)
The default option of the SBV library is the Z3 SMT solver. Am I right in my educated guess that PicoSAT is faster than Z3 for plain SAT solving?

Disclosure: I'm the author of the Haskell picosat bindings you mentioned.
SBV is a really robust library that's been around for a while; it's good if you want an interface to external SMT or SAT solvers like Yices or Z3. picosat is a much simpler library that I wrote because I wanted something that could be installed without external dependencies.
Am I right in my educated guess that picosat is faster for plain SAT-solving than Z3?
I don't know what your performance constraints are, but as far as the underlying solver libraries go, you're not going to notice a significant difference between Z3 and PicoSAT until you hit really enormous problems (billions of variables). Both are very heavily optimized libraries, and the bottleneck (at least from the Haskell side) is likely going to be marshalling data between the library and Haskell's runtime.
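For a sense of the interface: the binding essentially exposes a single call that takes CNF clauses as lists of integer literals, DIMACS-style. A minimal sketch, assuming the solve/Solution API of the picosat package on Hackage (check it against your installed version):

import Picosat (solve, Solution(..))

main :: IO ()
main = do
  -- each inner list is a clause; positive integers are variables,
  -- negative integers their negations
  result <- solve [[1, -2], [2, 3], [-1, -3]]
  case result of
    Solution assignment -> putStrLn ("SAT: " ++ show assignment)
    Unsatisfiable       -> putStrLn "UNSAT"
    Unknown             -> putStrLn "solver gave up"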

SBV is thread-safe.
Comparing Z3 and Lingeling for SAT performance is not an easy task. I'd hazard a guess that they would be more or less the same unless you take the time to figure out the exact parameters to fine-tune their internal heuristics.
The good thing is that SBV provides a common interface, so you can change the solver by merely importing a different bridge:
import Data.SBV.Bridge.Z3
vs
import Data.SBV.Bridge.Boolector
and if you compile Boolector to use Lingeling, then you can test performance easily by merely changing one line of Haskell.
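To make that concrete, here is a minimal SBV sketch. It uses the plain Data.SBV entry point and satWith to pick the backend; names such as sBool, .&&, sNot, and the boolector config are from recent SBV versions and may differ in older ones:

import Data.SBV

-- ask for a satisfying assignment of a small propositional formula
problem :: Symbolic SBool
problem = do
  p <- sBool "p"
  q <- sBool "q"
  return (p .&& sNot q)

main :: IO ()
main = do
  print =<< sat problem                -- default backend (Z3)
  print =<< satWith boolector problem  -- same formula, different backend

Assuming the boolector executable (compiled against Lingeling) is on your PATH, swapping solvers really is a one-line change.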

Related

Best modelling language for modelling LP/MILP? (NOT solver)

I have a Gurobi licence and I am after a good MILP/LP modelling language, which should be
1) free/open source
2) intuitive, i.e. something that looks like (taken from MiniZinc)
var int: x;
constraint x >= 0.5;
solve minimize x;
3) fast: the time to build the model and send it to Gurobi should be of a similar order to the best tools (AMPL, GAMS, etc.)
4) flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
Of course, and correct me if I'm wrong, AMPL and GAMS fail at 1), and Python and R fail at 2) (and perhaps at 3)?).
How about GLPK, Minizinc, ZIMPL etc.? They satisfy 1) and 2) but what about 3) and 4)? Are they as good as AMPL in this regard? If not, is there a modelling language satisfying 1-4?
I've used AMPL with Gurobi for mid-sized MIPs (~ 100k-1m variables?) and MiniZinc, mostly with Gecode, for smaller combinatorial problems. I've seen some Gurobi work done with R and Python, but haven't used it that way myself.
I'm less familiar with the other options. My understanding is that GAMS is quite similar to AMPL and much of what I have to say about AMPL may also be valid for GAMS, but I can't vouch for it.
Of course, and correct me if I'm wrong, AMPL GAMS fail at 1),
Yes, generally. There is an exception which probably isn't helpful for your specific requirements but might be useful to others: you can get free use of AMPL, Gurobi, and many other optimisation products, by using the NEOS web service. This is restricted to academic non-commercial purposes and you have to grant NEOS certain rights in relation to the problems you send them; definitely read those terms of service before using it. It also requires waiting for an available server, so if speed is a high priority this probably isn't the solution for you.
Python and R fail at 2) (and perhaps at 3)?).
In my limited experience, yes for (2). AMPL, GAMS, and MiniZinc are designed specifically for defining optimisation problems, so it's unsurprising that their syntax is more user-friendly for that purpose than languages like Python and R.
The flip side is that if you want to do just about anything other than define an optimisation problem, Python/R/etc. will probably serve you better than these modelling languages.
On speed: for the problems I usually work with, AMPL takes maybe a couple of seconds to build and presolve a MIP model which takes Gurobi a couple of minutes to solve. Obviously this is going to vary somewhat with hardware and details of the problem, but in general I would expect build time to be small compared to solve time for any of the solutions under discussion. Even with a good solver like Gurobi, big MIPs are hard. Many of the serious optimisation programmers I've met do use Python, so I presume the performance side is good enough.
However, that doesn't mean the choice of language/platform is irrelevant to speed. One of the nice features of AMPL (and also GAMS) is presolve, which attempts to reduce the problem size before sending it to the solver. My standard problems have a lot of redundant variables and constraints; AMPL identifies and eliminates many of these, reducing the problem size by about 80% and giving a noticeable improvement in solver time (as compared to runs where I switch off presolve, which I sometimes do for debugging-related reasons). This might be a consideration if you expect a lot of redundancy.
flexible/powerful (ability to deal with 3D+ arrays, activate/deactivate constraints easily, provide initial solutions to the solver, etc.)
MiniZinc handles up to 6D arrays, which may or may not be enough depending on your applications.
It's more flexible than AMPL in some areas and less so in others. AMPL has a lot of set-based functionality that I find useful (e.g. I can define a variable whose index set is something like "pairs of non-identical cities separated by no more than 500 km") and MiniZinc doesn't have this. OTOH, MiniZinc seems to be better than AMPL for solver-hopping, e.g. if I write a MZ model with a combinatorial constraint like "alldifferent" but then try to run it on a solver that doesn't recognise such constraints, MZ will translate it into something the solver can deal with.
I haven't tried deactivating constraints in MZ other than by commenting them out, so I can't help there, and similarly on providing initial solutions.
Overall, MiniZinc is a good choice to consider. Some pluses and minuses relative to AMPL ("free" being a big plus!) but it fills a similar niche.
IMHO, there is no such system if you consider the Python interfaces/modeling environments to SCIP or Gurobi too complicated:
# assuming PySCIPOpt, SCIP's Python interface; Gurobi's gurobipy looks similar
from pyscipopt import Model

model = Model()
x = model.addVar()
y = model.addVar(vtype="INTEGER")
model.setObjective(x + y)
model.addCons(2*x - y*y >= 0)
model.optimize()
To me this looks quite natural and straightforward. The immense benefit of using an actual programming language instead of a modeling language is that you can do anything in it, while there will always be boundaries in the latter.
If you are a looking for a modeling GUI, you should check out LITIC. It can be used almost entirely with drag-and-drop operations: https://litic.com/showcase.html
I've used a lot of the options mentioned, and some not yet mentioned:
GAMS
GAMS' Python API
GAMS' MATLAB API
AMPL
FICO Xpress Mosel
FICO Xpress Mosel's Python API
IBM ILOG OPL
Gurobi's Python API
PuLP (Python)
Pyomo (Python)
Python-MIP
JuMP (Julia)
MATLAB Optimization Toolbox
Google OR-Tools
Based on your requirements, I'd suggest trying Python-MIP, PuLP or JuMP. They are free and have easy syntax with no limit on array dimensionality.
Take a look at Google OR-Tools. I'm not sure if passing an initial solution to the solver is available in all of its interfaces, but if you use it in Python, it should probably satisfy all of 1-4.

Expression trees vs IL.Emit for runtime code specialization

I recently learned that it is possible to generate C# code at runtime and I would like to put this feature to use. I have code that does some very basic geometric calculations like computing line-plane intersections, and many of the calculations are performed for the same plane or the same line over and over again. By generating specialized code for some of these methods, I think I should be able to gain some performance.
The problem is that I'm not sure where to begin. From reading a few blog posts and browsing MSDN documentation I've come across two possible strategies for generating code at runtime: Expression trees and IL.Emit. Using expression trees seems much easier because there is no need to learn anything about OpCodes and various other MSIL related intricacies but I'm not sure if expression trees are as fast as manually generated MSIL. So are there any suggestions on which method I should go with?
The performance of both is generally the same, as expression trees are internally traversed and emitted as IL using the same underlying system functions that you would be using yourself. It is theoretically possible to emit more efficient IL using the low-level functions, but I doubt there would be any practically important performance gain. That would depend on the task, but I have not come across any practical optimisation of hand-emitted IL compared to IL emitted from expression trees.
I highly suggest getting the tool called ILSpy, which decompiles CLR assemblies. With it you can look at the code that actually traverses expression trees and emits the IL.
Finally, a caveat. I have used expression trees in a language parser, where function calls are bound to grammar rules that are compiled from a file at runtime. Compiled is the key word here. For many problems I came across, when what you want to achieve is known at compile time, you would not gain much performance from runtime code generation. Some CLR JIT optimizations might also be unavailable to dynamic code. This is only an opinion from my own practice, and your domain may be different, but if performance is critical, I would rather look at native code and highly optimized libraries. Some of the work I have done would be snail-slow without LAPACK/MKL. But that is advice you didn't ask for, so take it with a grain of salt.
If I were in your situation, I would try alternatives from high level to low level, in increasing "needed time & effort" and decreasing reusability order, and I would stop as soon as the performance is good enough for the time being, i.e.:
first, I'd check to see if Math.NET, LAPACK or some similar numeric library already has similar functionality, or I can adapt/extend the code to my needs;
second, I'd try Expression Trees;
third, I'd check Roslyn Project (even though it is in prerelease version);
fourth, I'd think about writing common routines with unsafe C code;
[fifth, I'd think about quitting and starting a new career in a different profession :) ],
and only if none of these work out, would I be so hopeless to try emitting IL at run time.
But perhaps I'm biased against low level approaches; your expertise, experience and point of view might be different.

Machine learning in OCaml or Haskell?

I'm hoping to use either Haskell or OCaml on a new project because R is too slow. I need to be able to use support vector machines, ideally separating out each execution to run in parallel. I want to use a functional language, and I have the feeling that these two are the best as far as performance and elegance are concerned (I like Clojure, but it wasn't as fast in a short test). I am leaning towards OCaml because there appears to be more support for integration with other languages, so it could be a better fit in the long run (e.g. OCaml-R).
Does anyone know of a good tutorial for this kind of analysis, or a code example, in either Haskell or OCaml?
Hal Daume wrote several major machine learning algorithms during his Ph.D. (he is now an assistant professor and rising star in the machine learning community).
On his web page there are an SVM, a simple decision tree, and a logistic regression, all in OCaml. By reading this code, you can get a feeling for how machine learning models are implemented in OCaml.
Another good example of writing basic machine learning models is the Owl library for scientific and numeric computing in OCaml.
I'd also like to mention F#, a new .Net language similar to OCaml. Here's a factor graph model written in F# analyzing Chess play data. This research also has a NIPS publication.
FP is suitable for implementing machine learning and data mining models, but what you gain from it is mostly NOT performance. It is true that FP supports parallel computing better than imperative languages like C# or Java, but implementing a parallel SVM or decision tree has very little to do with the language: parallel is parallel. The numerical optimizations behind machine learning and data mining are usually imperative; writing them purely functionally is usually hard and less efficient. Making these sophisticated algorithms parallel is a very hard task at the algorithm level, not at the language level. If you want to run 100 SVMs in parallel, FP helps. But I don't see any difficulty in running 100 libsvm instances in parallel in C++, not to mention that single-threaded libsvm is more efficient than a not-well-tested Haskell SVM package.
Then what do FP languages, like F#, OCaml, Haskell, give?
Easy to test your code. FP languages usually have a top-level interpreter, so you can test your functions on the fly.
Few mutable states. This means that passing the same parameters to a function always gives the same result, so debugging is easy in FP.
Succinct code. Type inference, pattern matching, closures, etc. let you focus more on the domain logic and less on the language, so when you write code your mind is mainly on the programming logic itself.
Writing code in FP languages is fun.
The only problem I can see is that OCaml doesn't really support multicore parallelism, while GHC has excellent support and performance. If you're looking to use multiple threads of execution, on multiple calls, GHC Haskell will be a lot easier.
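For a flavour of that, here is a minimal GHC sketch using the parallel package's strategies. The expensiveScore function is a made-up stand-in for per-problem work (e.g. one SVM run); it assumes compiling with ghc -O2 -threaded and running with +RTS -N:

import Control.Parallel.Strategies (parMap, rdeepseq)

-- hypothetical per-item workload, standing in for e.g. training one model
expensiveScore :: Int -> Double
expensiveScore n = sum [sin (fromIntegral (n * k)) | k <- [1 .. 100000]]

-- parMap evaluates each result in parallel across the available cores
main :: IO ()
main = print (sum (parMap rdeepseq expensiveScore [1 .. 64]))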
Secondly, the Haskell FFI is more powerful (that is, it does more with less code) than OCaml's, and more libraries are available (via Hackage: http://hackage.haskell.org ), so I don't think foreign interfaces will be a deciding factor.
As far as multi-language integration goes, combining C and Haskell is remarkably easy, and I say this as someone who is (unlike dons) not really much of an expert on either. Any other language that integrates well with C shouldn't be much trickier; you can always fall back to a thin interface layer in C if nothing else. For better or worse, C is still the lingua franca of programming, so Haskell is more than acceptable for most cases.
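To illustrate just how thin that C boundary is, here is a complete program binding sin from the C math library; the foreign import is a single declaration, with no wrapper code:

{-# LANGUAGE ForeignFunctionInterface #-}

-- bind the C library's sin directly
foreign import ccall unsafe "math.h sin"
  c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 1.0)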
...but. You say you're motivated by performance issues, and want to use "a functional language". From this I infer you're not previously familiar with the languages you ask about. Among Haskell's defining features are that it, by default, uses non-strict evaluation and immutable data structures--which are both incredibly useful in many ways, but it also means that optimizing Haskell for performance is often dramatically different from other languages, and well-honed instincts may lead you astray in baffling ways. You may want to browse performance-related topics on the Haskell wiki to get a feel for the issues.
Which isn't to say that you can't do what you want in Haskell--you certainly can. Both laziness and immutability can in fact be exploited for performance benefits (Chris Okasaki's thesis provides some nice examples). But be aware that there'll be a bit of a learning curve when it comes to dealing with performance.
Both Haskell and OCaml provide the lovely benefits of using an ML-family language, but for most programmers, OCaml is likely to offer a gentler learning curve and better immediate results.
It's hard to give a definitive answer on this. Haskell has the advantages that Don mentioned, along with a more powerful type system and cleaner syntax. OCaml will be easier to learn if you're coming from almost any other language (Haskell is about as functional as functional languages get), and working with mutable random-access structures can be a little clunky in Haskell. You will also likely find the performance characteristics of your OCaml code more intuitive than Haskell's because of Haskell's lazy evaluation.
Really, I would recommend you evaluate both if you have the time. Here are some relevant Haskell resources:
http://hackage.haskell.org/package/hslibsvm
http://hackage.haskell.org/package/HSvm
Real World Haskell: this is a great freely available book for Haskell
Learn You a Haskell: this tutorial is just plain fun to read
Oh, if you look further into Haskell be sure to sign up for the Haskell Beginners and Haskell Cafe lists. The community is friendly and eager to help out newcomers (is my bias showing?).
If speed is your prime concern then go for C. Haskell is pretty good performance wise but you are never going to get as fast as C. To my knowledge the only functional language that has bettered C in a benchmark is Stalin Scheme but that is very old and nobody really knows how it works.
I've written genetic programming libraries where performance was key, and I wrote them in a functional style in C. The functional style allowed me to easily parallelise them using OpenMP, and they scale linearly up to 8 cores within a single process. You certainly can't do that in OCaml, although Haskell is improving all the time with regard to concurrency and parallelism.
The downside of using C was that it took me months to finally find all the bugs and stop the core dumps which was extremely challenging because of the concurrency. Haskell would probably have caught 90% of those bugs on the first compilation.
So, speed at any cost? Looking back, I wish I'd used Haskell, as I could stand it being 2-3 times slower if it saved me over a month of development time.
While dons is correct that multicore parallelism at the thread level is better supported in Haskell, it sounds like you could live with process-level parallelism (from your phrase: "ideally separating out each execution to run in parallel"), which is supported quite well in OCaml. Keith pointed out that Haskell has a more powerful type system, but it can also be said that OCaml has a more powerful module system than Haskell's.
As others have pointed out, OCaml's learning curve will be lower than Haskell's; you'll likely be more productive more quickly in OCaml. That said, learning OCaml is a great stepping-stone towards learning Haskell because many of the underlying concepts are very similar, so you could always migrate to Haskell later and find a lot of things familiar there. And as you pointed out, there is an OCaml-R bridge.
As examples of Haskell and OCaml in machine learning, see the material on Hal Daume's and Lloyd Allison's homepages. IMO, it is much more straightforward to achieve C++-like performance in OCaml than in Haskell. Though, as already said, Haskell has a much nicer community (packages, tools, and support), syntax and features (e.g. FFI, probability monads via typeclasses), and parallel programming support.
Having revamped OCaml-R, I've got a few comments to make on integrating OCaml and R. It can be worthwhile to use OCaml to call R code; it works, but it is not yet exactly straightforward, so using OCaml to pilot R is the worthwhile part. Integrating R functionality much more thoroughly is still cumbersome: for example, much remains to be done to export R's type system and data to OCaml in a seamless way (you will have work to do). Moreover, the interaction of R's GC and OCaml's GC is a delicate point: freeing n values takes O(n^2) time, which isn't nice (to solve this, as far as I understand it, you either need a more flexible R API, or you need to implement a GC in the binding itself as a big R array so the two GCs interact properly).
In a nutshell, I'd go for the "pilot R from OCaml" approach.
Contributions on the GC interaction layer and on mapping R datatypes to OCaml are most welcome.
You may want to take a look at this : http://www.haskell.org/pipermail/haskell-cafe/2010-May/077243.html
Late answer, but a machine learning library in Haskell is available here: https://github.com/mikeizbicki/HLearn
This library implements various ML algorithms that are designed to have much faster cross-validation than the usual implementations. It is based on the paper "Algebraic classifiers: a generic approach to fast cross-validation, online training, and parallel training". The authors claim a 400x speed-up compared to the same task in Weka.
For Haskell, consider checking out Hasktorch (which I managed to use for my AI thesis). For OCaml, there seem to be TensorFlow bindings.

Looking for a functional language [closed]

I'm a scientist working mostly with C++, but I would like to find a better language. I'm looking for suggestions; I'm not even sure my "dream language" exists (yet), but here's my wishlist:
IMPORTANT FEATURES (in order of importance)
1.1: Performance: For science, performance is very important. I perfectly understand the importance of productivity, not just execution speed, but when your program has to run for hours, you just can't afford to write it in Python or Ruby. It doesn't need to be as fast as C++, but it has to be reasonably close (e.g.: Fortran, Java, C#, OCaml...).
1.2: High-level and elegant: I would like to be able to concentrate as much as possible on the science and get clear code. I also dislike verbose languages like Java.
1.3: Primarily functional: I like functional programming, and I think it suits both my style and scientific programming very well. I don't care if the language supports imperative programming; it might be a plus, but it has to focus on and encourage functional programming.
1.4: Portability: Should work well on Linux (especially Linux!), Mac and Windows. And no, I do not think F# works well on Linux with mono, and I'm not sure OCaml works well on windows ;)
1.5: Object-oriented, preferably under the "everything is an object" philosophy: I realized how much I liked object-oriented programming when I had to deal with pure C not so long ago. I like languages with a strong commitment to object-oriented programming, not just timid support.
NOT REALLY IMPORTANT, BUT THINGS THAT WOULD BE NICE
2.1: "Not-too-strong" typing: I find Haskell's strong typing system to be annoying, I like to be able to do some implicit casting.
2.2: Tools: Good tools are always a plus, but I guess it really depends on the language. I played with Haskell using Geany, a lightweight editor, and I never felt handicapped. On the other hand, I wouldn't have done the same with Java or even Scala (Scala, in particular, seems to be lacking good tools, which is a real shame). Java is really the #1 language here: with NetBeans and Javadoc, programming in Java is easy and fun.
2.3: Garbage collected, but translated or compiled without a virtual machine. I have nothing against virtual machines, but the two giants in the domain have their problems. On paper the .NET framework seems much better, and especially suited to functional programming, but in practice it's still very Windows-centric and the support for Linux/MacOS is not as good as it should be, so it's not really worth considering. Java is now a mature VM, but it annoys me on some levels: I dislike the way it deals with executables and generics, and it makes for terrible GUIs (although these things aren't so bad).
In my mind there are three viable candidates: Haskell, Standard ML, OCaml. (Scala is out on the grounds that it compiles to JVM codes and is therefore unlikely to be fast enough when programs must run for days.)
All are primarily functional. I will comment where I have knowledge.
Performant
OCaml gives the most stable performance for all situations, but performance is hard to improve. What you get is what you get :-)
Haskell has the best parallel performance and can get excellent use out of an 8-core or 16-core machine. If your future is parallel, I urge you to master your dislike of the type system and learn to use Haskell effectively, including the Data Parallel Haskell extensions.
The down side of Haskell performance is that it can be quite difficult to predict the space and time required to evaluate a lazy functional program. There are excellent profiling tools, but still significant effort may be required.
Standard ML with the MLton compiler gives excellent performance. MLton is a whole-program compiler and does a very good job.
High-level and elegant
Syntactically, Haskell is the clear winner. The type system, however, is cluttered with the remains of recent experiments. Its core is high-level and elegant, and the "type class" mechanism is particularly powerful.
Standard ML has ugly syntax but a very clean type system and semantics.
OCaml is the least elegant, both from a point of view of syntax and from the type system. The remains of past experiments are more obtrusive than in Haskell. Also, the standard libraries do not support functional programming as well as you might expect.
Primarily functional
Haskell is purely functional; Standard ML is very functional; OCaml is mostly functional (but watch out for mutable strings and for some surprising omissions in the libraries; for example, the list functions are not safe for long lists).
Portability
All three work very well on Linux. The Haskell developers use Windows and it is well supported (though it causes them agony). I know OCaml runs well on OSX because I use an app written in OCaml that has been ported to OSX. But I'm poorly informed here.
Object-oriented
Not to be found in Haskell or SML. OCaml has a bog-standard OO system grafted onto the core language, not well integrated with the rest of the language.
You don't say why you are keen for object-orientation. ML functors and Haskell type classes provide some of the encapsulation and polymorphism (aka "generic programming") that are found in C++.
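For instance, here is the type-class flavour of that generic programming, with a hypothetical Shape class (not from any library), roughly comparable to a C++ abstract base class or template constraint:

-- a generic interface over several concrete types
class Shape s where
  area :: s -> Double

newtype Circle = Circle Double            -- radius
data Rectangle = Rectangle Double Double  -- width, height

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Rectangle where
  area (Rectangle w h) = w * h

-- works for any Shape type, without knowing which one
totalArea :: Shape s => [s] -> Double
totalArea = sum . map area

main :: IO ()
main = print (totalArea [Circle 1, Circle 2])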
Type system that can be subverted
All three languages provide unsafe casts. In all three cases they are a good way to get core dumps.
I like to be able to do some implicit casting.
I think you will find Haskell's type-class system to your liking—you can get some effects that are similar to implicit casting, but safely. In particular, numeric and string literals are implicitly castable to any type you like.
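A small sketch of that literal overloading; string literals need the OverloadedStrings extension, and Text here assumes the text package is installed:

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- the same integer literal elaborates to fromInteger at each type
x :: Double
x = 3

y :: Rational
y = 3

-- string literals elaborate via fromString, here producing a Text
greeting :: Text
greeting = "hello"

main :: IO ()
main = print (x, y, greeting)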
Tools
There are pretty good profiling tools with Haskell. Standard ML has crappy tools. OCaml has basically standard Unix profiling plus an unusable debugger. (The debugger refuses to cross abstraction barriers, and it doesn't work on native code.)
My information may be out of date; the tools picture is changing all the time.
Garbage-collected and compiled to native code
Check. Nothing to choose from there.
Recommendation
Overcome your aversion to safe, secure type systems. Study Haskell's type classes (the original paper by Wadler and Blott and a tutorial by Mark Jones may be illuminating). Get deeper into Haskell, and be sure to learn about the huge collection of related software at Hackage.
Try Scala. It's an object-oriented functional language that runs in the JVM, so you can access everything that was ever written in Java. It has all your important features, and one of the nice to have features. (Obviously not #2.2 :) but that will probably get better quickly.) It does have very strong typing, but with type inference it doesn't really get in your way.
You just described Common Lisp...
If you like using lists for most things and care about performance, use Haskell or OCaml, although OCaml suffers significantly in that floats on the heap need to be boxed due to the runtime's design (but arrays of floats and purely-float records aren't individually boxed, which is good).
If you're willing to use arrays more than lists, or plan on programming using mutable state, use Scala rather than Haskell. If you're looking to write threaded multi-core code, use Scala or Haskell (Ocaml requires you to fork).
Scala's list is polymorphic, so a list of ints is really a list of boxed Int objects. Of course you could write your own list of ints in Scala that would be as fast, but I assume you'd rather use the standard libraries. Scala does have as much tail recursion as is possible on JVM.
Ocaml fails on Vista 64 for me, I think because they just changed the linker in the latest version (3.11.1?), but earlier versions worked fine.
Scala tool support is buggy at the moment if you're using nightly builds, but should be good soon. There are eclipse and netbeans plugins. I'm using emacs instead. I've used both the eclipse and netbeans debugger GUI successfully in the past.
None of Scala, Ocaml, or Haskell, have truly great standard libraries, but at least you can easily use Java libs in Scala. If you use mapreduce, Scala wins on integration. Haskell and Ocaml have a reasonable amount of 3rd party libs. It annoys me that there are differently named combinators for 2-3 types of monad in Haskell.
http://metamatix.org/~ocaml/price-of-abstraction.html might convince you to stay with C++. It's possible to write Scala that's almost identical in performance to Java/C++, but not necessarily in a high level functional or OO style.
http://gcc.gnu.org/projects/cxx0x.html seems to suggest that auto x = ... (type inference for expressions) and lambdas are usable. C++0x with boost, if you can stomach it, seems pretty functional. The downside to C++ high performance template abusing libraries is, of course, compile time.
Your requirements seem to me to describe ocaml quite well, except for the "not-too-strong" typing. As for tools, I use and like tuareg mode for emacs. Ocaml should run on windows (I haven't used it myself though), and is pretty similar to F#, FWIW.
I'd consider the ecosystem around the language as well. In my opinion Ocaml's major drawback is that it doesn't have a huge community, and consequently lacks the large library of third-party modules that are part of what makes python so convenient. Having to write your own code or modify someone else's one-shot prototype module you found on the internet can eat up some of the time you save by writing in a nice functional language.
You can use F# on mono; perhaps worth a look? I know that mono isn't 100% perfect (nothing ever is), but it is very far from "terrible"; most of the gaps are in things like WCF/WPF, which I doubt you'd want to use from FP. This would seem to offer much of what you want (except obviously it runs in a VM - but you gain a huge set of available libraries in the bargain (i.e. most of .NET) - much more easily than OCaml which it is based on).
I would still go for Python but using NumPy or some other external module for the number crunching or alternatively do the logic in Python and the hotspots in C / assembler.
You are always giving up cycles for comfort; the more comfort, the more cycles. Thus your requirements are mutually exclusive.
I think that Common Lisp fits your description quite well.
1.1: Performance: Modern CL implementations are almost on par with C. There are also foreign function interfaces to interact with C libraries, and many bindings are already done (e.g. the GNU Scientific Library).
1.2: High-level and elegant: Yep.
1.3: Primarily functional: Yes, but you can also "get imperative" wherever the need arises; CL is "multi-paradigm".
1.4: Portability: There are several implementations with differing support for each platform. Some links are at CLiki and ALU Wiki.
1.5: Object-oriented, preferably under the "everything is an object" philosophy: CLOS, the Common Lisp Object System, is much closer to being "object oriented" than any of the "curly" languages, and also has features you will sorely miss elsewhere, like multimethods.
2.1: "Not-too-strong" typing: CL has dynamic, strong typing, which seems to be what you want.
2.2: Tools: Emacs + SLIME (the Superior Lisp Interaction Mode for Emacs) is a very nice free IDE. There is also a plugin for Eclipse (Cusp), and the commercial CL implementations often come with their own IDEs.
2.3: Garbage collected, but translated or compiled without a virtual machine. The Lisp image that you will be working on is a kind of VM, but I think that's not what you mean.
A further advantage is the incremental development: you have a REPL (read-eval-print-loop) running that provides a live interface into the running image. You can compile and recompile individual functions on the fly, and inspect the current program state on the live system. You have no forced interruptions due to compiling.
Short Version: The D Programming Language
Yum Yum Yum, that is a big set of requirements.
As you probably know, object orientation, high-level semantics, performance, portability and all the rest of your requirements don't tend to fit together from a technical point of view. Let's split this into a different view:
Syntax Requirements
Object Orientated presentation
Low memory management complexity
Allows function style
Isn't Haskell (damn)
Backend Requirements
Fast for science
Garbage Collected
On this basis I would recommend the D programming language: it is a successor to C that tries to be all things to all people.
This article on D covers its functional programming aspects. It is object-oriented, garbage collected, and compiles to machine code, so it is fast!
Good Luck
Clojure and/or Scala are good candidates for the JVM
I'm going to assume that you are familiar enough with the languages you mentioned to have ruled them out as possibilities. Given that, I don't think there is a language that fulfills all your expectations. However, there are still a few languages you could take a look at:
Clojure This really is a very nice language. Its syntax is based on LISP, and it runs on the JVM.
D This is like C++ done right. It has all the features you want except that it's kind of weak on the functional programming.
Clean This is based very heavily on Haskell, but removes some of Haskell's problems. Downsides are that it's not very mature and doesn't have a lot of libraries.
Factor Syntactically it's based on Forth, but has support for LISP-like functional programming as well as better support for classes.
Take a peek at Erlang. Originally, Erlang was intended for building fault-tolerant, highly parallel systems. It is a functional language, embracing immutability and first-class functions. It has an official Windows binary release, and the source can be compiled for many *NIX platforms (there is a MacPorts build, for example).
In terms of high-level features, Erlang supports list comprehensions, pattern matching, guard clauses, structured data, and other things you would expect. It's relatively slow at sequential computation, but pretty amazing if you're doing parallel computation. Erlang does run on a VM, but it runs on its own VM, which is part of the distribution.
Erlang, while not strictly object-oriented, does benefit from an OO mindset. Erlang uses a thing called a process as its unit of concurrency. An Erlang process is actually a lot like a native thread, except with much less overhead. Each process has a mailbox, will be sent messages, and will process those messages. It's easy enough to treat processes as if they were objects.
I don't know if it has much in the way of scientific libraries. It might not be a good fit for your needs, but it's a cool language that few people seem to know about.
Are you sure that you really need a functional language? I did most of my programming in Lisp, which is obviously a functional language, but I have found that functional programming is more of a mindset than a language feature. I'm using VB right now, which I think is an excellent language (speed, support, IDE), and I basically use the same programming style that I did in Lisp: functions call other functions that call other functions, and functions are usually 1-5 lines long.
I do know that Lisp has good performance and runs on all platforms, but it is somewhat outdated in its support for features such as graphics, multi-threading, etc.
I've taken a look at Clojure, but if you don't like Java you probably won't like Clojure. It's a functional Lisp-style language implemented on top of Java, but you'll probably find yourself using Java libraries all the time, which adds the verbosity of Java. I like Lisp, but I didn't like Clojure despite the hype.
Are you also sure about your performance requirements? Matlab is an excellent language for a lot of scientific computation, but it is kind of slow, and I hate reading it. You might find it useful, though, especially in conjunction with other languages, for prototypes/scenarios/subunits.
Many of your requirements are based on hearsay. One example: the idea that Mono is "terrible".
http://banshee-project.org/
That's the official media player of many Linux distributions. It's written in C#. (They don't even have a public Windows release of it!)
Your assertions about the relative performance of various languages are equally dubious. And requiring a language to not use a virtual machine is quite unrealistic and totally undesirable. Even an OS is a form of VM on which applications run, which virtualises the hardware devices of the machine.
Though you earn points for mentioning tools (although not with enough priority). As Knuth observed, the first question to ask about a language is "What's the debugger like?"
Looking over your requirements, I would recommend VB on either Mono, or a virtual machine running windows. As a previous poster said, the first thing to ask about a language is "What is the debugger like" and VB/C# have the best debugger. Just a result of all those Microsoft employees hammering on the debugger, and having the teams nearby to bug (no pun intended) into fixing it.
The best thing about VB and C# is the large set of developer tools, community, Google help, code examples, libraries, software that interfaces with them, etc. I've used a wide variety of software development environments over the past 27 years, and the only things that come close are the Xerox Lisp machine environments (better) and the Symbolics Lisp machines (worse).

Trivial mathematical problems as language benchmarks

Why do people insist on using trivial mathematical problems like finding numbers in the Fibonacci sequence for language benchmarks? Don't these usually get optimized to relativistic speeds? Isn't the brunt of the bottlenecks usually in I/O, system API calls, operations on strings and structures, processing large quantities of data, abstract object-oriented stuff, etc?
It is a throwback to the old days, when compiler technology for what we would now call basic math was still evolving rapidly.
Now, compiler evolution is more focused on exploiting new instructions for niche operations, 64-bit math, and so on.
Micro-benchmarks such as the ones you mention were useful, though, when evaluating the efficiency of the hotspot compiler when Java was first launched, and in evaluating the efficiency of .NET versus C/C++.
Your suggestion that I/O and system calls are the likely bottlenecks is correct, at least for some space of problems. But I notice you suggested string operations. One person's irrelevant micro-benchmark is another person's critical performance metric.
EDIT: ps, I also remember using linpack and other micro-benchmarks to compare versions of the JVM, and to compare vendors of the JVM. From v4 to v5 there was a big jump in perf, I guess the JIT compiler got more effective. Also, IBM's JVM was ahead of Sun's at that time, on Windows-x86.
Because if you want to benchmark the language/compiler, these "math problems" are good indicators of the "bare speed" of the generated code. Either they use the iterative solution, which is a tight loop and indicates how well the compiler can push instructions to the processor, or they use the recursive solution, which indicates how it handles recursive calls of short functions (inlining, tail recursion, etc.) (although the Ackermann function is usually used for that too).
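For instance, the two usual shapes of the Fibonacci benchmark look like this (sketched here in Haskell, since several threads above use it; the shape is the same in any language):

-- recursive version: stresses call overhead, inlining, stack handling
fibRec :: Int -> Integer
fibRec n
  | n < 2     = fromIntegral n
  | otherwise = fibRec (n - 1) + fibRec (n - 2)

-- iterative/accumulator version: a tight loop testing raw generated code
fibIter :: Int -> Integer
fibIter n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

main :: IO ()
main = print (fibRec 30, fibIter 30)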
Usually, the benchmark suite for a language contains tests benchmarking other parts as well, e.g. gzip compression, text searching, object creation, virtual function calls, and exception throw/catch.
The other things you've noticed, syscalls and IO, are usually not included because:
syscalls are in fact not that slow; applications don't spend a significant portion of their time in the kernel, except in tests specifically targeted at them or when something is seriously wrong with the program
syscall and IO performance does not depend on the language, but rather on the OS and hardware
I'd think a simple, well-established algorithm removes the possibility that the benchmark is biased (whether through ignorance or malice) to favor one language. It is very difficult to write a complex program in two different languages in exactly the same way. Testing something like the efficiency of a multithreaded application in C# vs Java, for example, would require developers skilled in multithreaded development in both languages, and there would still be questions as to whether the benchmark app properly represents the general case, or whether it misrepresents a special case that only one language handles well.
Back when the Sieve of Eratosthenes was a popular benchmark for C compilers, I thought it would be funny if one of the compiler authors recognized the sieve code and replaced it with a precomputed lookup.
