How do I compare programming languages for projects at work? - programming-languages

I am wondering what are some specific questions I should keep in mind when I am comparing programming languages for use on given work projects. For instance, I am told logic programming languages like Prolog are good for natural language processing. I'm not sure why exactly; I assume it is true because experts say so, but I don't know the consideration that guides them to that conclusion. So I am looking for a simple heuristic, a checklist of questions, I can apply to evaluate programming languages and be able to explain my decisions, so that I can say "Language X is good for Y because it does Z."

The only way I know of to figure out which programming language is most appropriate for a given problem, is to know lots of programming languages. After all, if you don't know screwdrivers exist, how will you know not to use a hammer when you encounter a screw?
Unfortunately, there are thousands (maybe tens of thousands) of programming languages, so learning even a significant portion of them is just not realistic.
However, programming languages implement paradigms, and Peter Van Roy's famous poster lists only about 34 of those. He deliberately ignored several aspects, including anything related to typing, so the real number is probably higher, but we can expect it to be well below 100.
That's still a lot, but thankfully paradigms aren't atomic either: they are composed of concepts. The poster lists about a dozen of those (again ignoring typing and a couple of other things), which is significantly fewer than the number of paradigms.
Learning a significant portion of concepts is entirely feasible. Once you know them, you can look at a problem and see which concepts would be useful to have to build a solution. Then you look at which paradigms contain those concepts and which languages implement those paradigms. Pick one, learn it, use it, solve the problem.
And since you already know the concepts (and thus the paradigms) the language implements, you only need to learn the syntax, not the semantics. There aren't actually that many different syntaxes in the wild (C, C++, Objective-C, Objective-C++, D, Go, Java, C#, ECMAScript, PHP, Vala and many others share a lot of syntax, for example, as do Smalltalk, Self, Newspeak and Objective-C; or SML, OCaml and F#; and so on), so chances are you'll pick it up very quickly. (Besides, with today's modern IDEs that's much less of an issue anyway.)

One small point to bear in mind: if you are an expert in language X and you are asked to develop a program in domain Y for which language Z is supposed to be ideal -- will you deliver sooner and better by writing in the language you are an expert in, even if it is not (by some measures) ideal for the problem domain? Or will you deliver better and sooner by first learning a new language?
I think your search for a simple heuristic is in vain.

Start with what your team is familiar with. While there's a lot to be said for the philosophy that a great developer can pick up almost any language in short order, there's also a practical side: if you have a ton of .NET or Java coders, you're best served by starting from that base.
Now, within both stacks you have options on things like functional programming (F#, Erlang, etc.) and other languages on the runtime your team is most familiar with. But it really does boil down to the culture, infrastructure, and (most importantly) the experience and flexibility of the individual developers on your team.

There are several factors to consider:
What is your local expertise? If you have a company full of C programmers, it's probably not worth retraining everybody to be Lisp programmers.
What are your libraries? If you have libraries that you want to use, make sure that they are compatible with your language of choice.
If you are starting a new project with a wide-open field of options, I would recommend taking a few sample problems out of the application domain. Nothing too complex, but nothing trivial either. Then, implement (or have someone from your team implement) these samples in each candidate language. Then, choose the one that is clear, easy, and appropriate.

Related

Are new paradigms of programming always driven by a need?

I'm something of a programming language junkie, and examples abound...
Lisp was originally created as a practical mathematical notation for computer programs
Simula was designed for doing simulations, and gave us objects and classes
C was designed for implementing system software (specifically, the Unix operating system)
Erlang was designed with the aim of improving the development of telephony applications at Ericsson.
Languages like Perl and Ruby have similar origins, but these four gave birth to fundamental styles of computer programming, as opposed to "just" implementing an existing methodology or style of solving specific software engineering problems.
Is every new programming paradigm primarily driven by a need to solve a practical problem? Does every new programming language come about from a programmer scratching an itch?
As I plan to dedicate my life to research in new programming languages for AI, I'm wondering whether I should pursue the theory of programming intelligence directly, or attempt to solve practical problems in AI and then "discover" the paradigms to solve them.
No, but most are. You're forgetting esoteric programming languages.
Example: http://www.dangermouse.net/esoteric/piet.html
Piet is a language that uses images as code.
I think that every language is designed according to some need. That need can of course just be the language designer's own desire for a more elegant language, one he himself feels more comfortable programming in.
However, the languages that are successful will very likely provide solutions to a more general need. This need may not necessarily be evident at the time the language is designed, but for the language to gain recognition I think it has to be a need that is shared and eventually articulated as a general desire.
There are probably a lot of languages out there that did not address a problem shared by many others, or that address needs which have not yet been widely recognized as such.
To your concrete question: I think the best way to discover the shortcomings of current languages is to use them. Of course, theory may help you come up with appropriate solutions. So I'd say the best way is (as always) to have both theoretical knowledge and practical experience.
You'd think nobody would invest time in inventing, refining, implementing, using and spreading a new way of doing things if "the old ways" worked just fine. And the real world, as in your examples, confirms this theory. All this still applies if we broaden the scope beyond programming paradigms. So I'd say it is safe to assume inventions are (partly) driven by a need for the thing invented.
*hits his smartass alter ego with a bat and takes over the talk*
As for the question in the last paragraph: if anyone knows whether you'll have more fun pursuing theory or solving real problems, it's you. I would choose practice any time of the day - but I'm not you. From looking at (programming language) history, though, I can tell that hardly any great (i.e., usable-to-get-things-done) language came from theory alone. It is logical to assume one can't find a good tool for an application without knowing that application thoroughly from daily work.

domain specific languages and compilers

I was looking over the contents of Martin Fowler's recent book, Domain Specific Languages, and I noticed some ANTLR examples - that got me thinking that writing compilers will become more and more popular, since people's needs in this area will increase.
So, will compiler theory remain as arid (being subjective here) as it has been until now, or is there a chance we'll get more applied, programmer-oriented materials?
Even though DSLs may seem to create more opportunities for creating new compilers, I don't think they will make the challenges of writing a compiler any easier. You can either use compiler tools like yacc to generate code to handle your DSL syntax, or you can hand-craft your own parser with an eye towards better internal efficiency than what the yacc-generated code spits out.
Either way, you have to have sufficient knowledge of how to define and manipulate a language grammar to make your DSL work and to avoid loopholes and can't-get-there-from-here problems.
Spiffy tools help to implement the solution, but they don't solve the problem for you. To quote my high school chemistry teacher: "Sure! Bring your calculators to class! Calculators only help you get the wrong answer faster!"
"So, will compiler theory remain as arid (being subjective here) as it has been until now, or is there a chance we'll get more applied, programmer-oriented materials?"
I'd say that compiler theory is actually pretty rich, though it may not be centered around C-style languages. If you want to look at some powerful tools commonly used by academic language designers, I suggest that you check out functional programming languages (ML, Scheme, Lisp, Haskell, OCaml, Scala, Clojure, etc.). Personally I prefer Haskell with Parsec, but there are many options. I think the common consensus is that the structure of these languages is more conducive to language design and implementation, at least in a theoretical sense.
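To give a flavor of that, here is a minimal sketch of a toy command DSL parsed with Parsec; it assumes the parsec package, and the Cmd type with its "move"/"turn" commands is made up purely for illustration:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- A toy DSL: programs such as "move 3" or "turn 90", one command per line.
    data Cmd = Move Int | Turn Int deriving Show

    number :: Parser Int
    number = read <$> many1 digit

    command :: Parser Cmd
    command = (Move <$> (string "move" *> spaces *> number))
              <|> (Turn <$> (string "turn" *> spaces *> number))

    program :: Parser [Cmd]
    program = command `sepEndBy` newline <* eof

    main :: IO ()
    main = print (parse program "<demo>" "move 3\nturn 90")
    -- prints: Right [Move 3,Turn 90]

The point is not that this replaces a real compiler pipeline, but that the parser reads almost like the grammar it implements.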
Like Kristopher said above, programmers don't necessarily make the best language designers. I've seen some really cool DSLs and I've seen some pretty awful ones (my opinion, of course, YMMV). Knowledge of language concepts is a must for designing any language, DSL or otherwise (type theory, category theory, various code analyses, machine optimization, etc.). Not to mention, if you're designing a DSL, you have to have a fairly intimate knowledge of the domain you're targeting.
Tools off the shelf like yacc, ANTLR, flex, and cup can make building your compiler easier like buying wood from a lumberyard to build your house is easier than going off into the woods and cutting down trees. Both get you the material for the structure, but you still have to know how to build the house. We will definitely see more DSLs in the near future and these tools will help. Will the DSLs be worth using or even useable, however? The tools won't make a difference here, at least in my opinion. Language design employs a lot of real computer science and/or mathematics. Good language designers will have to at least be familiar with both, and good language implementers must be familiar with language design tools.
As high-quality DSLs get easier to build, we are more likely to see more of them. There are several obstacles:
Choosing a good problem domain for a DSL. It has to be broad enough to appeal to more than just the author, and narrow enough to have good solutions (C# doesn't count).
Implementing a DSL well. Lots of people seem to think if they have a parser they are done. Actually, you need a lot of technology: parsing, analysis, code generation, ... (See DMS Software Reengineering Toolkit for an engine that contains what I think is needed to produce DSLs effectively)
Acceptance of the DSL by the community. It's amazing how many people insist on coding in just the programming language they know, and nothing else.
There was an explosion of programming languages in the 70's and 80's. Then Java came along and killed everything off. Now we are in another phase of people inventing lots of languages. So, I'd say it is cyclical, and there is really nothing "new" going on.
However, one aspect that remains constant is that most programmers aren't very good at designing languages. Tools like yacc and ANTLR make some of the implementation easier, but they don't make would-be language designers any better at language design.
There already are some useful tools: look at Xtext, EMFText, JetBrains MPS, the Intentional Domain Workbench, and Microsoft's former Oslo project with the M language. All these tools make defining languages easier, although that ease has its cost; for a DSL you may also have somewhat different requirements than for a general-purpose programming language.

Typical tasks/problems to demonstrate differences between programming languages

Somewhere someone said (I honestly do not know where I got this from) that one should learn one programming language per year. I can see where that might be a good idea, because you learn new patterns and ways to look at the same problems by solving them in different languages. Typically, when learning a new language, I look at how certain problems are supposed to be solved in that language. My question now is: what, in your experience, are good, simple, and clearly defined tasks that demonstrate the differences between programming languages?
The idea here is to have a set of tasks that, when I solve all of them in the language I am learning, gives me a good overview of how things are supposed to be done in that language. I do not know if that is even possible, but it sure would be a useful thing to have.
A typical example one often sees, especially in tutorials for functional languages, is the implementation of quicksort.
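For reference, here is the version those tutorials usually show, written in Haskell; it is a readable specification of the algorithm rather than an efficient in-place sort:

    -- Not an in-place quicksort, but it stays close to the algorithm's description.
    quicksort :: Ord a => [a] -> [a]
    quicksort []       = []
    quicksort (p : xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

    main :: IO ()
    main = print (quicksort [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]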
Search for "Code Kata" for some resources.
Pick a problem. Solve it in different languages.
http://slott-softwarearchitect.blogspot.com/2009/08/code-kata-resources.html
In today's world, I don't think simple tasks like implementing a bubble sort will really give you a taste for a language. The reason is that several languages have C-style syntax at their core (Java, C#, PHP, JavaScript, etc.).
Instead, go for small apps like a simple contact manager. This will allow you to work with the chosen language's UI, Database, and logic features.

The benefits of learning languages that you won't use [closed]

I have read numerous time that learning a language such as Haskell, Lisp or Smalltalk will somehow make you a better programmer while you program in other languages.
Is there more than just anecdotal evidence for that claim?
Or is it just the way people rationalize having spent a lot of time learning a programming language that they will never use?
IMHO, it is all about learning a new programming paradigm. If you know Java and then C#, there's not much gain, since both of them have almost the same "type of programming".
But if you get to learn a functional or dynamic language, for instance, you're forced to think in another way, and that will probably help you to program better in your favorite language.
It is something like: "It is so easy doing this in {different language you learned}. There must be a better way to do this in {language you already know}". And then you rethink, and build up a more elegant way to do this in {language you already know}.
I don't have any hard evidence, but I have really appreciated the different way of looking at problems that I have gained since learning Lisp (the same goes for Python and C).
The key isn't necessarily learning different languages though, I believe that the key is actually the different viewpoints that you gain by learning different programming styles.
Good examples are functional, imperative, object-oriented, etc. Also, there are common design differences between interpreted and compiled languages, static vs. dynamic typing, etc.
Although most people do a majority of their programming using a single style (most commonly OOP over the past few years), I think that all programmers should know multiple styles so that they are better able to see the shortfalls of their own style.
Can't shed much light on this in terms of programming languages, but it seems very similar to the "why learn a dead language?" argument that surrounds Latin, and much of the reasoning there can be applied here.
Programming is a way of thinking, not writing code in programming language X: that is "coding", not "programming".
By knowing at least something about more than just one programming language - preferably across different paradigms, so imperative/OOP/functional/logical - you train that way of thinking about problems outside the context of the specific details and quirks of language X.
I think this always improves your abilities to be(come) a better programmer tremendously.
A great side-effect of learning new languages is the potential for application in your existing language.
For instance, I'm a Java programmer and I took the time to learn my first functional language (Haskell). I was recently asked to learn Scala for an upcoming project. This is extremely easy since I understand the concepts of guards, recursion, etc. from Haskell.
Deeply learning a language just for the sake of learning it has little benefit. If you have a lot of tasks and you don't know the language that is ideal for solving them, then it makes sense to learn that language. Otherwise it makes more sense to spend the time becoming an expert in the languages you already know.
I don't know that there will have been much rigorous study regarding the benefits of multi-programming language exposure on overall programming ability, but I would argue that the studies regarding why learning a foreign human language (which you may never use in practice) is beneficial would in general hold equally well for studying foreign programming languages. The benefits ascribed often include improved cognitive abilities as well as improved understanding of one's native language.
Here are some links to studies.
Anecdotally, I complained a great deal about taking COBOL and have never really used it, but I was able to apply things I learned in that class at my first job.
If you give any credence to the Pragmatic Programming guys, consider their advice from page 14 of their first book:
"Learn at least one new language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut."
Some examples that come to mind:
Knowing C and having to deal with memory management and do-it-yourself data structures can help you understand performance issues when programming in a higher level language where those details are hidden from you.
Conversely, learning an OO language can affect your C programming - with, for example, the concept of Polymorphism prompting you to use function pointers in ways you might not have otherwise.
Learning a language where functions are first class objects that can be passed around can make you think of similar techniques in other languages, even if, in those other languages, you have to make the functions methods in objects that get passed around.
Learning about the way Erlang handles concurrency can make you rethink how much shared state you use between threads in other languages.
Any language that has a built-in feature you find useful can prompt you to implement your own version of that feature in another language that doesn't have it, and thus allow you to solve problems in ways you might not have thought of if you hadn't been exposed to the feature in the language that has it built-in.
Learning about Interfaces in Java can make you think about the benefits of precisely specifying your (small "i") interfaces in other languages that don't have them as a formal construct in a type system.
No doubt there are others.
Learning a language is not a binary event. If you are a decent programmer, you should be able to trust your own instincts as to whether a language offers you a new take on your craft.
Virtually every language worth considering these days can be downloaded and test-driven in a couple of minutes. So do it -- pick one and try it out.
There are a limited number of cases where this "laissez-faire" approach falls short. If you're a complete beginner, of course it doesn't work. When I first learned C, I had to have it beaten into me, but it did turn out to be worth it because it made me understand pointers, memory reference and dynamic allocation in a way I hadn't previously.
But if you know that much already, just poke around and look for a language that makes your lightbulb go on.
Different languages have different ways of implementing the same ideas. By learning new languages, you get a different perspective on how things can be accomplished, and you can then use that knowledge to improve how you program in your current environment. Think about object-oriented and functional programming: OO programmers can learn a lot about parallelization from languages like C.
Learning a language, especially one that practices a new paradigm, is very beneficial for every programmer. For example learning Scheme will help someone understand functional programming. The programmer can later practice what he/she learned with other languages like C#. She can think of new ways of doing things.
Also, as languages evolve, it's highly likely that the language you use will adopt some features of other languages. Having taught myself Ruby, I was able to grasp the changes in C# 3.0 much more easily.
I think learning languages will always benefit you even if you don't use them again. I started playing with Ioke as an attempt to learn something experimental and because of it my JavaScript has improved because certain ideas have been cemented.
Learning a new language will possibly give you new insights that you will try to translate to your main language.
I don't think there will be any hard evidence--I think this is more of an intuitive thing. Learning a totally different language will help you look at things totally different. Or maybe it won't. In any case, what's the harm in learning something?
It's entirely subjective, but way back when, after taking an undergraduate course in Haskell, I did notice that my programming style in C became more 'Haskell-like' for a while; I used a lot of simple, recursive functions. I also noticed that this programming style seemed to yield some of the same benefits programming in Haskell had; bugs were fewer, code was easier to understand (albeit slower).
So, while learning another programming language may not make everyone a better programmer, it definitely was a learning experience for me, personally.
What are the benefits of learning mathematics or physics that you won't use, or the benefits of studying philosophy or dead tongues?
It's the intellectual achievement and the enlightenment that matter; you will be a wiser person with any new thing that you learn, no matter whether it is a programming language, literature, or a role-playing game... and of course, if it's related to your working field, then you'll actually find a use for it, sooner or later :-)
I spent some time studying clojure even though I knew I wouldn't use it in the near-term (mostly because I can't really deploy on the JVM).
It has concepts that aren't supported by the languages I use (C#/C/C++/Python/Perl) and I wanted to know what I was missing and also if it would be worth looking into libraries that purport to add these features.
Specifically, I'm very interested in understanding Lisp-style macros and the direct concurrency support. I also spent some time reading the implementation, specifically the data structures, which was very educational - it's good to see a quality implementation of persistent data structures to learn how they work (and how they give you immutability without sacrificing much performance).
Beyond what has already been said, I really like new languages just because they can bring new interest to programming. You learn different ways to approach problems and the strengths/weaknesses of certain languages. It is something new to learn, and any good programmer should always be striving to learn new things. It mixes up the daily routine of possibly programming in the same language for years.
I also like what everyone has said about programming perspective.
Some good points have been made.
I would add that learning languages you won't use in production work can be of value:
To better appreciate and absorb the arguments and methods in texts and papers that will improve programming ability in languages I do use for production work (e.g. MIX/MMIX for Knuth's Art of Computer Programming; RATFOR for Kernighan and Plauger's Software Tools; I still use some ALGOL-based syntax for some pseudocode although I never wrote runnable code in ALGOL outside University)
To be able to check or prototype programs that will be written in a different language (e.g. some routines for numerical computing in C can be quickly checked or scaled using languages that have appropriate functionality built in such as Fortran, Python or Haskell)
Learning a new language can give insight as to how it could be used to more easily solve problems that were put to one side because of time or complexity constraints.

Why functional languages? [closed]

I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application?
Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to).
One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way.
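As a rough sketch of that freedom in Haskell (assuming GHC with the parallel package and the threaded runtime), the two pure calls below may be evaluated in either order, or simultaneously, without changing the result:

    import Control.Parallel (par, pseq)  -- from the "parallel" package

    -- A deliberately slow pure function.
    fib :: Int -> Integer
    fib n | n < 2     = toInteger n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main =
      let a = fib 30
          b = fib 31
      in  a `par` (b `pseq` print (a + b))  -- spark 'a' in parallel, force 'b', then combine
    -- build with: ghc -threaded; run with: +RTS -N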
Because functions in functional languages behave much like mathematical functions, it's easy to translate mathematical algorithms into them. In some cases, this can make code more readable.
Traditionally, one of the big disadvantages of functional programming was precisely this lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala, this is easier.
Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot of functional programming features, and you can do functional programming in Python too. I think functional programming is gaining popularity mostly for two reasons: concurrency is becoming a real problem in ordinary programming because we're getting more and more multiprocessor computers, and the languages are getting more accessible.
I don't think that there's any question about the functional approach to programming "catching on", because it's been in use (as a style of programming) for about 40 years. Whenever an OO programmer writes clean code that favors immutable objects, that code is borrowing functional concepts.
However, languages that enforce a functional style are getting lots of virtual ink these days, and whether those languages will become dominant in the future is an open question. My own suspicion is that hybrid, multi-paradigm languages such as Scala or OCaml will likely dominate over "purist" functional languages, in the same way that pure OO languages (Smalltalk, Beta, etc.) have influenced mainstream programming without ending up as the most widely used notations.
Finally, I can't resist pointing out that your comments re FP are highly parallel to the remarks I heard from procedural programmers not that many years ago:
The (mythical, IMHO) "average" programmer doesn't understand it.
It's not widely taught.
Any program you can write with it can be written another way with current techniques.
Just as graphical user interfaces and "code as a model of the business" were concepts that helped OO become more widely appreciated, I believe that increased use of immutability and simpler (massive) parallelism will help more programmers see the benefits that the functional approach offers. But as much as we've learned in the past 50 or so years that make up the entire history of digital computer programming, I think we still have much to learn. Twenty years from now, programmers will look back in amazement at the primitive nature of the tools we're currently using, including the now-popular OO and FP languages.
The main plus for me is its inherent parallelism, especially as we are now moving away from higher CPU clock frequency and towards more and more cores.
I don't think it will become the next programming paradigm and completely replace OO type methods, but I do think we will get to the point that we need to either write some of our code in a functional language, or our general purpose languages will grow to include more functional constructs.
Even if you never work in a functional language professionally, understanding functional programming will make you a better developer. It will give you a new perspective on your code and programming in general.
I say there's no reason to not learn it.
I think the languages that do a good job of mixing functional and imperative style are the most interesting and are the most likely to succeed.
I'm always skeptical about the Next Big Thing. Lots of times the Next Big Thing is pure accident of history, being there in the right place at the right time no matter whether the technology is good or not. Examples: C++, Tcl/Tk, Perl. All flawed technologies, all wildly successful because they were perceived either to solve the problems of the day or to be nearly identical to entrenched standards, or both. Functional programming may indeed be great, but that doesn't mean it will be adopted.
But I can tell you why people are excited about functional programming: many, many programmers have had a kind of "conversion experience" in which they discover that using a functional language makes them twice as productive (or maybe ten times as productive) while producing code that is more resilient to change and has fewer bugs. These people think of functional programming as a secret weapon; a good example of this mindset is Paul Graham's Beating the Averages. Oh, and his application? E-commerce web apps.
Since early 2006 there has also been some buzz about functional programming and parallelism. Since people like Simon Peyton Jones have been worrying about parallelism off and on since at least 1984, I'm not holding my breath until functional languages solve the multicore problem. But it does explain some of the additional buzz right about now.
In general, American universities are doing a poor job teaching functional programming. There's a strong core of support for teaching intro programming using Scheme, and Haskell also enjoys some support there, but there's very little in the way of teaching advanced techniques for functional programmers. I've taught such a course at Harvard and will do so again this spring at Tufts. Benjamin Pierce has taught such a course at Penn. I don't know if Paul Hudak has done anything at Yale. The European universities are doing a much better job; for example, functional programming is emphasized in important places in Denmark, the Netherlands, Sweden, and the UK. I have less of a sense of what's happening in Australasia.
I don't see anyone mentioning the elephant in the room here, so I think it's up to me :)
JavaScript is a functional language. As more and more people do more advanced things with JS, especially leveraging the finer points of jQuery, Dojo, and other frameworks, FP will be introduced through the web developer's back door.
In conjunction with closures, FP makes JS code really light, yet still readable.
"Most applications are simple enough to be solved in normal OO ways"
OO ways have not always been "normal." This decade's standard was last decade's marginalized concept.
Functional programming is math. Paul Graham on Lisp (replace Lisp with functional programming):
"So the short explanation of why this 1950s language is not obsolete is that it was not technology but math, and math doesn't get stale. The right thing to compare Lisp to is not 1950s hardware, but, say, the Quicksort algorithm, which was discovered in 1960 and is still the fastest general-purpose sort."
I bet you didn't know you were doing functional programming when you used:
Excel formulas
Quartz Composer
JavaScript
Logo (Turtle graphics)
LINQ
SQL
Underscore.js (or Lodash)
D3
"The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it"
That one is just a matter of time though. Your average corporate programmer learns whatever the current Big Thing is. 15 years ago, they didn't understand OOP.
If functional programming catches on, your "average corporate programmers" will follow.
"It's not really taught at universities (or is it nowadays?)"
It varies a lot. At my university, SML is the very first language students are introduced to.
I believe MIT teaches Lisp as a first-year course. These two examples may not be representative, of course, but I believe most universities at the very least offer some optional courses on functional programming, even if they don't make it a mandatory part of the curriculum.
"Most applications are simple enough to be solved in normal OO ways"
It's not really a matter of "simple enough" though. Would a solution be simpler (or more readable, robust, elegant, performant) in functional programming? Many things are "simple enough to be solved in Java", but it still requires a godawful amount of code.
In any case, keep in mind that functional programming proponents have claimed that it was the Next Big Thing for several decades now. Perhaps they're right, but keep in mind that they weren't when they made the same claim 5, 10 or 15 years ago.
One thing that definitely counts in their favor, though, is that recently C# has taken a sharp turn towards functional programming, to the extent that it's practically turning a generation of programmers into functional programmers without them even noticing. That might just pave the way for the functional programming "revolution". Maybe. ;)
Man cannot understand the perfection and imperfections of his chosen art if he cannot see the value in other arts. Following rules only permits development up to a point in technique and then the student and artist has to learn more and seek further. It makes sense to study other arts as well as those of strategy.
Who has not learned something more about themselves by watching the activities of others? To learn the sword study the guitar. To learn the fist study commerce. To just study the sword will make you narrow-minded and will not permit you to grow outward.
-- Miyamoto Musashi, "A Book of Five Rings"
One key feature in a functional language is the concept of first-class functions. The idea is that you can pass functions as parameters to other functions and return them as values.
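A minimal Haskell sketch of both directions (the names twice and addN are made up for illustration):

    -- 'twice' takes a function as an argument; 'addN' returns a freshly built function.
    twice :: (a -> a) -> a -> a
    twice f = f . f

    addN :: Int -> (Int -> Int)
    addN n = \x -> x + n

    main :: IO ()
    main = do
      print (twice (+ 3) 10)           -- 16
      print (map (addN 5) [1, 2, 3])   -- [6,7,8]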
Functional programming involves writing code that does not change state. The primary reason for doing so is that successive calls to a function will then yield the same result. You can write functional code in any language that supports first-class functions, but there are some languages, like Haskell, which do not allow you to change state. In fact, you're not supposed to cause any side effects (like printing out text) at all - which sounds like it could be completely useless.
Haskell instead employs a different approach to I/O: monads. An IO value describes the desired I/O operation, which is carried out by the runtime when it is reached from main; at any other level, IO values are simply ordinary values in the system.
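A minimal sketch of what that looks like in practice (the shout helper is a made-up example):

    import Data.Char (toUpper)

    -- 'shout' is pure; all the effects live in the IO actions that main
    -- sequences together with do-notation.
    shout :: String -> String
    shout s = map toUpper s ++ "!"

    main :: IO ()
    main = do
      putStrLn "What's your name?"
      name <- getLine
      putStrLn (shout name)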
What advantages does functional programming provide? Functional programming allows coding with less potential for bugs, because each component is completely isolated. Also, using recursion and first-class functions allows for simple proofs of correctness which typically mirror the structure of the code.
I don't think most realistic people think that functional programming will catch on (become the main paradigm like OO). After all, most business problems are not pretty math problems but hairy imperative rules for moving data around and displaying it in various ways, which means it's not a good fit for the pure functional programming paradigm (the learning curve of monads far exceeds that of OO).
OTOH, functional programming is what makes programming fun. It makes you appreciate the inherent, timeless beauty of succinct expressions of the underlying math of the universe. People say that learning functional programming will make you a better programmer. This is of course highly subjective. I personally don't think that's completely true either.
It makes you a better sentient being.
I'd point out that everything you've said about functional languages, most people were saying about object-oriented languages about 20 years ago. Back then it was very common to hear about OO:
* The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it
* It's not really taught at universities (or is it nowadays?)
* Most applications are simple enough to be solved in normal IMPERATIVE ways
Change has to come from somewhere. A meaningful and important change will make itself happen regardless of whether people trained in earlier technologies take the opinion that change isn't necessary. Do you think the change to OO was good despite all the people that were against it at the time?
I must be dense, but I still don't get it. Are there any actual examples of small applications written in a functional language like F# where you can look at the source code and see how and why it was better to use such an approach than, say, C#?
F# could catch on because Microsoft is pushing it.
Pro:
F# is going to be part of next version of Visual Studio
Microsoft is building community for some time now - evangelists, books, consultants that work with high profile customers, significant exposure at MS conferences.
F# is a first-class .NET language and the first functional language that comes with a really big foundation (not that Lisp, Haskell, Erlang, Scala, or OCaml don't have lots of libraries; they are just not as complete as .NET's)
Strong support for parallelism
Contra:
F# is very hard to get started with even if you are good with C# and .NET - at least it was for me :(
it will probably be hard to find good F# developers
So, I give F# a 50:50 chance of becoming important. Other functional languages are not going to make it in the near future.
I think one reason is that some people feel that the most important part of whether a language will be accepted is how good the language is. Unfortunately, things are rarely so simple. For example, I would argue that the biggest factor behind Python's acceptance isn't the language itself (although that is pretty important). The biggest reason why Python is so popular is its huge standard library and the even bigger community of third-party libraries.
Languages like Clojure or F# may be the exception to the rule on this considering that they're built upon the JVM/CLR. As a result, I don't have an answer for them.
It seems to me that those people who never learned Lisp or Scheme as an undergraduate are now discovering it. As with a lot of things in this field there is a tendency to hype and create high expectations...
It will pass.
Functional programming is great. However, it will not take over the world. C, C++, Java, C#, etc will still be around.
What will come of this I think is more cross-language ability - for example implementing things in a functional language and then giving access to that stuff in other languages.
When reading "The Next Mainstream Programming Language: A Game Developer’s Perspective" by Tim Sweeney, Epic Games, my first thought was - I got to learn Haskell.
(Slides are available as a PPT file and as Google's HTML version.)
Most applications can be solved in [insert your favorite language, paradigm, etc. here].
Although this is true, different tools can be used to solve different problems. Functional programming just offers another (higher?) level of abstraction that allows us to do our jobs more effectively when used correctly.
Things have been moving in a functional direction for a while. The two cool new kids of the past few years, Ruby and Python, are both radically closer to functional languages than what came before them — so much so that some Lispers have started supporting one or the other as "close enough."
And with the massively parallel hardware putting evolutionary pressure on everyone — and functional languages in the best place to deal with the changes — it's not as far a leap as it once was to think that Haskell or F# will be the next big thing.
It's catching on because it's the best tool around for controlling complexity.
See:
- slides 109-116 of Simon Peyton-Jones talk "A Taste of Haskell"
- "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney
Check out Why Functional Programming Matters.
Have you been following the evolution of programming languages lately? Every new release of all mainstream programming languages seems to borrow more and more features from functional programming.
Closures, anonymous functions, and passing and returning functions as values used to be exotic features known only to Lisp and ML hackers. But gradually C#, Delphi, Python, Perl, and JavaScript have added support for closures. It's not possible for any up-and-coming language to be taken seriously without closures.
Several languages, notably Python, C#, and Ruby have native support for list comprehensions and list generators.
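For comparison, here is what the feature looks like in Haskell, where list comprehensions are native; a small sketch listing Pythagorean triples:

    triples :: [(Int, Int, Int)]
    triples =
      [ (a, b, c)
      | c <- [1 .. 20], b <- [1 .. c], a <- [1 .. b]
      , a * a + b * b == c * c
      ]

    main :: IO ()
    main = print triples  -- [(3,4,5),(6,8,10),(5,12,13),...]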
ML pioneered generic programming in 1973, but support for generics ("parametric polymorphism") has only become an industry standard in the last 5 years or so. If I remember correctly, Fortran supported generics in 2003, followed by Java in 2004, C# in 2005, and Delphi in 2008. (I know C++ has supported templates since the early 1990s, but 90% of discussions on C++'s STL start with "here there be demons".)
What makes these features appealing to programmers? It should be plainly obvious: it helps programmers write shorter code. All languages in the future are going to support—at a minimum—closures if they want to stay competitive. In this respect, functional programming is already in the mainstream.
"Most applications are simple enough to be solved in normal OO ways"
Who says you can't use functional programming for simple things too? Not every functional program needs to be a compiler, theorem prover, or massively parallel telecommunications switch. I regularly use F# for ad hoc throwaway scripts in addition to my more complicated projects.
Wow - this is an interesting discussion. My own thoughts on this:
FP makes some tasks relatively simple (compared to non-FP languages).
Non-FP languages are already starting to take ideas from FP, so I suspect that this trend will continue and we will see more of a merge, which should make the leap to FP easier for people.
I don't know whether it will catch on or not, but from my investigations, a functional language is almost certainly worth learning, and will make you a better programmer. Just understanding referential transparency makes a lot of design decisions so much easier - and the resulting programs much easier to reason about. Basically, if you run into a problem, it tends to be a problem with the output of a single function, rather than a problem with inconsistent state, which could have been caused by any of the hundreds of classes/methods/functions in an imperative language with side effects.
The stateless nature of FP maps more naturally to the stateless nature of the web, and thus functional languages lend themselves more easily to elegant, RESTful web apps. Contrast this with Java and .NET frameworks that need to resort to horribly ugly hacks like ViewState and session keys to maintain application state, and to maintain the (occasionally quite leaky) abstraction of a stateful imperative language on top of an essentially stateless platform like the web.
And also, the more stateless your application, the more easily it can lend itself to parallel processing. Terribly important for the web, if your website happens to get popular. It's not always straightforward to just add more hardware to a site to get better performance.
My view is that it will catch on now that Microsoft has pushed it much further into the mainstream. For me it's attractive because of what it can do for us, because it's a new challenge, and because of the job opportunities it presents for the future.
Once mastered it will be another tool to further help make us more productive as programmers.
A point missed in the discussion is that the best type systems are found in contemporary FP languages. What's more, compilers can infer all (or at least most) types automatically.
It is interesting that one spends half the time writing type names when programming in Java, yet Java is far from type-safe. Meanwhile, you may never write a type in a Haskell program (except as a kind of compiler-checked documentation), and the code is completely type-safe.
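A small sketch of what that means: no signatures are written for the two definitions below (their names are made up for illustration), yet GHC infers fully general, checked types:

    compose f g x = f (g x)      -- inferred: (b -> c) -> (a -> b) -> a -> c
    swapPair (x, y) = (y, x)     -- inferred: (a, b) -> (b, a)

    main :: IO ()
    main = do
      print (compose length words "well typed programs")  -- 3
      print (swapPair (1 :: Int, "one"))                  -- ("one",1)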
I agree with the first point, but times change. Corporations will respond, even if they're late adopters, if they see that there's an advantage to be had. Life is dynamic.
They were teaching Haskell and ML at Stanford in the late 1990s. I'm sure that places like Carnegie Mellon, MIT, Stanford, and other good schools are presenting it to students.
I agree that most "expose relational databases on the web" applications will continue in that vein for a long time. Java EE, .NET, Ruby on Rails, and PHP have evolved some pretty good solutions to that problem.
You've hit on something important: It might be the problem that can't be solved easily by other means that will boost functional programming. What would that be?
Will massive multicore hardware and cloud computing push them along?
Because functional programming has significant benefits in terms of productivity, reliability and maintainability. Many-core may be a killer application that finally gets big corporations to switch over despite large volumes of legacy code. Furthermore, even big commercial languages like C# are taking on a distinct functional flavour as a result of many-core concerns. Side effects simply don't fit well with concurrency and parallelism.
I do not agree that "normal" programmers won't understand it. They will, just like they eventually understood OOP (which is just as mysterious and weird, if not more so).
Also, most universities do teach functional programming; many even teach it as the first programming course.
In addition to the other answers, casting the solution in pure functional terms forces one to understand the problem better. Conversely, thinking in a functional style will develop better* problem solving skills.
*Either because the functional paradigm is better or because it will afford an additional angle of attack.

Resources