New or not so well-known paradigms, syntax features and behaviours of programming languages?

I've designed some educational programming languages and interpreters for them, but my problem always was that they ended up "normal" and "boring", mostly similar to some kind of existing language (ASM and BASIC).
I find it really hard to come up with new ideas for syntax features, "neat things", and new or heavily modified programming paradigms. I always thought that it was hard to come up with good new things, rather than merely fun-but-useless ones, for a case like this.
I wondered if you could help me out with your creativity:
What features in terms of language syntax and built-in functions, and maybe even new paradigms, can I work into my language to keep it useless but make it more fun, enjoyable, interesting, and/or different to program in?

I always thought that it was hard to come up with good new things
You were right. This is why John Backus, Ken Iverson, Niklaus Wirth, Robin Milner, Kristen Nygaard and Ole-Johan Dahl, Alan Kay, and Barbara Liskov all won Turing Awards—they contributed good new ideas to the design of programming languages.
If you want to add a dash of interest to your own designs, these are excellent people to steal from.

Both ASM and BASIC are imperative languages, so you might want to consider features of functional programming languages, especially lambdas and maps. You might also want to consider interesting flows of control, for example, being able to throw an exception and then later, as a result of catching the exception and making a certain call, resume from the point that the exception was thrown (albeit using a modified environment). Also, co-routines, or other forms of language-level parallelism are often interesting.
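As a minimal illustration of the lambdas-and-maps idea, here is a sketch in Haskell (any functional language would do; the squares name is just for the example):
-- map takes a function as an argument; the lambda passed to it is an
-- anonymous function value.
squares :: [Int] -> [Int]
squares = map (\x -> x * x)
main :: IO ()
main = print (squares [1 .. 5])  -- [1,4,9,16,25]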

In addition to Michael's comment on functional languages, look at closures and blocks (like they're done in Objective-C). Those let you treat functions or pieces of code as first-class objects that you can pass around and call on demand. Some cool stuff can be done with that, and it's also shaping up to becoming the paradigm for programming massively multi-core systems.
You could also look into currying, which means binding some of a function's parameters so you can then call it with fewer arguments. That way, you could create a base-b logarithm function and curry it to create functions for the base-2, base-10, etc. logarithm.
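Here is that exact example as a small sketch in Haskell, where every function is curried by default and the standard logBase function takes the base as its first argument:
-- Supplying only the base yields new one-argument logarithm functions.
log2, log10 :: Double -> Double
log2  = logBase 2
log10 = logBase 10
main :: IO ()
main = print (log2 8, log10 1000)  -- roughly (3.0, 3.0)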
And something less functional (as in language): look at Ruby's way of treating everything as an object (even numbers); you can do quite a bit with that, like an object-oriented runtime with introspection, an interpreter "for free," etc. Implementing OOP stuff is easier than you'd think.
A lot of stuff has been done in the last 30-odd years; don't restrict yourself to 70s-style programming! ;) If you're looking for inspiration, check out Ruby, Python, Scala, Objective-C, JavaScript (read Douglas Crockford's JavaScript: The Good Parts), etc.

The Esolang wiki gives a good sample of the weirds and wonderfuls of all kinds of esoteric programming languages, including many user creations. Perhaps some inspiration for something sane lies therein.

Look at Forth. It is something original. Too original.

INTERCAL has plenty of unusual language features B-)

I've always thought it would be neat to apply CSP to a stack-based language. It could get pretty interesting.

See Wikipedia: Programming Languages. There are many useful links, especially in the Taxonomies section.
So much of the "new" is really just "forgotten old". I will hold my thoughts on some of the "popular" programming languages of the day.

There are many things that could be explored and active research is being done on some of them. Some of the things I think would be useful are:
real continuations in a non-functional language
here is an attempt to add them on to C++: http://mainisusuallyafunction.blogspot.co.nz/2012/02/continuations-in-c-with-fork.html
languages that let the user create new syntax elements
FORTH and J might be starting points.
Pogoscript is interesting as well because flow-control constructs like if/elseif/else and while/wend aren't special and can be created in user code.
custom user-defined operators actually aren't new: I think Haskell, Nemerle, Kaleidoscope and several others already do this, but even that wouldn't be "boring" (see the sketch below)
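As a small hedged sketch of how cheap this can be, here is a user-defined pipeline operator in standard Haskell (the |> name is borrowed from F#):
-- The fixity declaration registers the new operator just like a built-in.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x
main :: IO ()
main = print ([1 .. 10] |> filter even |> sum)  -- 30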

Related

How do I compare programming languages for projects at work?

I am wondering what are some specific questions I should keep in mind when I am comparing programming languages for use on given work projects. For instance, I am told logic programming languages like Prolog are good for natural language processing. I'm not sure why exactly; I assume it is true because experts say so, but I don't know the consideration that guides them to that conclusion. So I am looking for a simple heuristic, a checklist of questions, I can apply to evaluate programming languages and be able to explain my decisions, so that I can say "Language X is good for Y because it does Z."
The only way I know of to figure out which programming language is most appropriate for a given problem, is to know lots of programming languages. After all, if you don't know screwdrivers exist, how will you know not to use a hammer when you encounter a screw?
Unfortunately, there are thousands (maybe tens of thousands) of programming languages, so learning even a significant portion of them is just not realistic.
However, programming languages implement paradigms. And Peter van Roy's famous poster lists only about 34 of those. He deliberately ignored several aspects, though, including anything related to typing, so the real number is probably higher than that. But we can expect it to be well below 100.
That's still a lot, but thankfully paradigms aren't atomic either: they are composed of concepts. The poster lists about a dozen of those (again ignoring typing and a couple of other things). Significantly fewer than paradigms.
Learning a significant portion of concepts is entirely feasible. Once you know them, you can look at a problem and see which concepts would be useful to have to build a solution. Then you look at which paradigms contain those concepts and which languages implement those paradigms. Pick one, learn it, use it, solve the problem.
And since you already know the concepts (and thus the paradigms) the language implements, you only need to learn the syntax, not the semantics. There aren't actually that many different syntaxes in the wild (C, C++, Objective-C, Objective-C++, D, Go, Java, C#, ECMAScript, PHP, Vala and many others share a lot of syntax, for example, as do Smalltalk, Self, Newspeak, and Objective-C; SML, OCaml, and F#; and so on), so chances are you'll pick it up very quickly. (Besides, with today's modern IDEs that's much less of an issue anyway.)
One small point to bear in mind: if you are an expert in language X and you are asked to develop a program in domain Y for which language Z is supposed to be ideal -- will you deliver sooner and better by writing in the language you are an expert in even if it is not (by some measures) ideal for the problem domain ? Or will you deliver better and sooner by first learning a new language ?
I think your search for a simple heuristic is in vain.
Start with what your team is familiar with. While there's a lot to be said for the philosophy that a great developer can pick up almost any language in short order, there's the practical fact that if you have a ton of .NET or Java coders, you're best served starting from that base.
Now, within both stacks you have options on things like functional programming (F#, Erlang, etc.) and other languages on the runtime your team is most familiar with. But it really does boil down to the culture, infrastructure, and (most importantly) the experience and flexibility of the individual developers on your team.
There are several factors to consider:
What is your local expertise? If you have a company full of C programmers, it's probably not worth retraining everybody to be Lisp programmers.
What are your libraries? If you have libraries that you want to use, make sure that they are compatible with your language of choice.
If you are starting a new project with a wide-open field of options, I would recommend taking a few sample problems out of the application domain. Nothing too complex, but nothing trivial either. Then, implement (or have someone from your team implement) these samples in each candidate language. Then, choose the one that is clear, easy, and appropriate.

What is a computer programming language?

At the risk of sounding naive, I ask this question in search of a deeper understanding of the concept of programming languages in general. I write this question for my own edification and the edification of others.
What is a useful definition of a computer programming language and what are its basic and necessary components? What are the key features that differentiate languages (functional, imperative, declarative, object oriented, scripting, etc...)?
One way to think about this question. Imagine you are looking at the hardware of a modern desktop or laptop computer. Assume, that the C language or any of its variants do not exist. How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Tangentially related, what is it about computer languages that allow other languages to exist? For example take a scripting language like Javascript, Perl, or PHP. I assume part of the definition of these is that there is an interpreter most likely implemented in C or C++ at some level. Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
I would be satisfied with a list of concepts that can be looked up or researched further.
Like any language, programming languages are simply a communication tool for expressing and conveying ideas. In this case, we're translating our ideas of how software should work into a structured and methodical form that computers (as well as other humans who know the language, in most cases) can read and understand.
What is a useful definition of a computer programming language and what are its basic and necessary components?
I would say the defining characteristic of a programming language is as follows: things written in that language are intended to eventually be transformed into something that is executed. Thus, pseudocode, while perhaps having the structure and rigor of a programming language, is not actually a programming language. Likewise, UML can express many powerful ideas in an abstract manner just like a programming language can, but it falls short because people don't generally write UML to be executed.
How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Even if the word "programming language" wasn't part of the shared vocabulary of the group I was talking to, I think it would be obvious to the others that we'd need a way to communicate with the computer. Just as no one expects a car to drive itself (yet!) without external instructions in the form of interaction with the steering wheel and pedals, no one could expect the hardware to function without being told what to do. As noted above, a programming language is the conduit through which we can make that communication happen.
Tangentially related, what is it about computer languages that allow other languages to exist?
All useful programming languages have a property called Turing completeness. If one language in the Turing-complete set can do something, then any of them can; they are said to be computationally equivalent.
However, just because they're equally "powerful" doesn't mean they're equally nice to work with for humans. This is why many people are willing to sacrifice the unparalleled micromanagement you get from writing assembly code in exchange for the expressiveness and power you get with higher-level languages, like Ruby, Python, or C#.
Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
Since there is a Javascript interpreter written in C, it follows that it must be possible to write a Javascript interpreter in Javascript, since both are Turing-complete. However, again, note that Turing-completeness says nothing about how hard it is to do something in one language versus another -- only whether it is possible to begin with. Your Javascript-interpreter-inside-Javascript might well be horrendously inefficient, consume absurd amounts of memory, require enormous processing power, and be a hideously ugly hack. But Turing-completeness guarantees it can be done!
While this doesn't directly answer your question, I am reminded of the Revenge of the Nerds essay by Paul Graham about the evolution of programming languages. It's certainly an interesting place to start your investigation.
Not a definition, but I think there are essentially two strands of development in programming languages:
Those working their way up from what the machine can do to something more expressive and less tied to the machine (Assembly, Fortran, C, C++, Java, ...)
Those going down from some mathematical or theoretical computer science concept of computation to something implementable on a real machine (Lisp, Prolog, ML, Haskell, ...)
Of course, in reality the picture is not as neat, and both strands influence each other by borrowing the best ideas.
Slightly long rant ahead.
A computer language is actually not all that different from a human language. Both are used to express ideas and concepts in commonly understood terms. Among different human languages there are syntactic differences, but you can express the same thing in every language (does that make human languages Turing complete? :)). Some languages are better suited for expressing certain things than others.
For example, although technically not completely correct, the Inuit language seems quite suited to describing various kinds of snow. Japanese, in my experience, is very suitable for expressing one's feelings and state of mind thanks to a large, concise vocabulary in that area. German is pretty good for being very precise thanks to largely unambiguous grammar.
Different programming languages have different specialities as well, but they mostly differ in the level of detail required to express things. The big difference between human and programming languages is mostly that programming languages lack a lot of vocabulary and have very few "grammatical" rules. With libraries you can extend the vocabulary of a language though.
For example:
Make me coffee.
Very easy to understand for a human, but only because we know what each of the words mean.
coffee : a drink made from the roasted and ground beanlike seeds of a tropical shrub
drink : a liquid that can be swallowed
swallow : cause or allow to pass down the throat
... and so on and so on
We know all these definitions by heart, but we had to learn them at some point.
In the same way, a computer can be "taught" to "understand" words as well.
Coffee::make()->giveTo($me);
This could be a perfectly valid expression in a computer language. If the computer "knows" what Coffee, make() and giveTo() means and if $me is defined. It expresses the same idea as the English sentence, just with a different, more rigorous syntax.
In a different environment you'd have to say slightly different things to get the same outcome. In Japanese for example you'd probably say something like:
コーヒーを作ってもらっても良いですか?
Kōhī o tsukuttemoratte mo ii desu ka?
Which would roughly translate to:
if ($Person->isAgreeable('Coffee::make()')) {
return $Person->return(Coffee::make());
}
Same idea, same outcome, but the $me is implied and if you don't check for isAgreeable first you may get a runtime error. In computer terms that would be somewhat analogous to Ruby's implied behaviour of returning the result of the last expression ("grammatical feature") and checking for available memory first (environmental necessity).
If you're talking to a really slow person with little vocabulary, you probably have to explain things in a lot more detail:
Go to the kitchen.
Take a pot.
Fill the pot with water.
...
Just like Assembler. :o)
Anyway, the point being, a programming language is actually a language just like a human language. Their syntax is different and specialized for the problem domain (logic/math) and the "listener" (computers), but they're just ways to transport ideas and concepts.
EDIT:
Another point about "optimization for the listener" is that programming languages try to eliminate ambiguity. The "make me coffee" example could, technically, be understood as "turn me into coffee". A human can tell what's meant intuitively, a computer can't. Hence in programming languages everything usually has one and only one meaning. Where it doesn't, you can run into problems, the "+" operator in Javascript being a common example.
1 + 1 -> 2
'1' + '1' -> '11'
See "Programming Considered as a Human Activity." EWD 117.
http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
Also See http://www.csee.umbc.edu/331/current/notes/01/01introduction.pdf
Human expression which:
describes mathematical functions
makes the computer turn switches on and off
This question is very broad. My favorite definition is that a programming language is a means of expressing computations:
Precisely
At a high level
In ways we can reason about them
By computation I mean what Turing and Church meant: the Turing machine and the lambda calculus have equivalent expressive power (which is a theorem), and the Church-Turing hypothesis (which is a conjecture) says roughly that there's no more powerful notion of computation out there. In other words, the kinds of computations that can be expressed in any programming language are at best the kinds that can be expressed using Turing machines or lambda-calculus programs—and some languages will be able to express only a subset of those calculations.
This definition of computation also encompasses your friendly neighborhood hardware, which is pretty easy to simulate using a Turing machine and even easier to simulate using the lambda calculus.
Expressing computations precisely means the computer can't wiggle out of its obligations: if we have a particular computation in mind, we can use a programming language to force the computer to perform that computation. (Languages with "implementation defined" or "undefined" constructs make this task more difficult. Programmers using these languages are often willing to settle for—or may be unknowingly settling for—some computation that is only closely related to the computation they had in mind.)
Expressing computation at a high level is what programming languages are all about. An important reason that there are so many different programming languages out there is that there are so many different high-level ways of thinking about problems. Often, if you have an important new class of problems to solve, you may be best off creating a new programming language. For example, Larry Wall's writing suggests that solving a class of problems called "systems administration" was a motivation for him to create Perl.
(Another reason there are so many different programming languages out there is that creating a new language is a lot of fun, and anyone can learn to do it.)
Finally, many programmers want languages that make it easy to reason about programs. For example, today a student of mine implemented a new algorithm that made his program run over six times faster. He had to reason very carefully about the contents of C arrays to make sure that the new algorithm would do the same job the old one did. Luckily C has decent tools for reasoning about programs, for example:
A change in a[i] cannot affect the value of a[i-1].
My student also applied a reasoning principle that isn't valid in C:
The sum of a sequence of unsigned integers will be at least as large as any integer in the sequence.
This isn't true in C because the sum might overflow. One reason some programmers prefer languages like Standard ML is that in SML, this reasoning principle is always valid. Of languages in wide use, Haskell probably has the strongest reasoning principles; Richard Bird has developed equational reasoning about programs to a high art.
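The pitfall itself is easy to demonstrate. A minimal sketch in Haskell, whose fixed-width Word8 wraps modulo 256 just as C's unsigned arithmetic wraps modulo 2^n:
import Data.Word (Word8)
-- The sum is smaller than one of its elements: 200 + 100 = 300 = 44 (mod 256).
main :: IO ()
main = print (sum [200, 100 :: Word8])  -- prints 44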
I will not attempt to address all the tangential details that follow your opening question. But I hope you will get something out of an answer that aims to give a deeper understanding, as you asked, of a fundamental question about programming languages.
One thing a lot of "IT" types forget is that there are 2 types of computer programming languages:
Software programming languages: C, Java, Perl, COBOL, etc.
Hardware programming languages: VHDL, Verilog, System Verilog, etc.
Interesting.
I'd say the defining feature of a programming language is the ability to make decisions based on input. Effectively, if and goto. Everything else is lots and lots of syntactic sugar. This is the idea that spawned Brainfuck, which is actually remarkably fun to (try to) use.
There are places where the line blurs; for example, I doubt people would consider XSLT to really be a programming language, but it's Turing-complete. I've even solved a Project Euler problem with it. (Very, very slowly.)
Three main properties of languages come to mind:
How is it run? Is it compiled to bare metal (C), compiled to mostly bare metal with some runtime lookup (C++), run on a JIT virtual machine (Java, .NET), bytecode-interpreted (Perl), or purely interpreted (uhh..)? This doesn't comment much on the language itself, but speaks to how portable the code may be, what sort of speed I might expect (and thus what broad classes of tasks would work well), and sometimes how flexible the language is.
What paradigms does it support? Procedural? Functional? Is the standard library built with classes or functions? Is there reflection? Is there, ideally, support for pretty much whatever I want to do?
How can I represent my data? Are there arrays, and are they fixed-size or not? How easy is it to use strings? Are there structs or hashes built in? What's the type system like? Are there objects? Are they class-based or prototype-based? Is everything an object, or are there primitives? Can I inherit from built-in objects?
I realize the last one is a very large collection of potential questions, but it's all related in my mind.
I imagine rebuilding the programming language landscape entirely from scratch would work pretty much how it did the first time: iteratively. Start with assembly, the list of direct commands the processor understands, and wrap it with something a bit easier to use. Repeat until you're happy.
Yes, you can write a Javascript interpreter in Javascript, or a Python interpreter in Python (see: PyPy), or a Python interpreter in Javascript. Such languages are called self-hosting. Have a look at Perl 6; this has been a goal for its main implementation from the start.
Ultimately, everything just has to translate to machine code, not necessarily C. You can write D or Fortran or Haskell or Lisp if you want. C just happens to be an old standard. And if you write a compiler for language Foo that can ultimately spit out machine code, by whatever means, then you can rewrite that compiler in Foo and skip the middleman. Of course, if your language is purely interpreted, this will probably result in a stack overflow...
As a friend taught me about computer languages, a language is a world. A world of communication with that machine. It is a world for implementing ideas, algorithms, and functionality, as Alonzo Church and Alan Turing described. It is the technical equivalent of the mathematical structures that the aforementioned scientists built. It is a language with expressions and also limits. However, as Ludwig Wittgenstein said, "The limits of my language mean the limits of my world"; there are always limitations, and that's how one chooses the language that best fits one's needs.
This is a generic answer... some thoughts, actually, more than an answer.
There are many definitions of this, but the one I prefer is:
Computer programming is programming that helps to solve a particular technical task/problem.
There are 3 key phrases to look out for:
You: The computer will do what you (the programmer) tell it to do.
Instruct: Instructions are given to the computer in a language that it can understand. We will discuss that below.
Problem: At the end of the day, computers are tools (complex ones). They are there to make our life simpler.
The answer could be lengthy, but you can find more about computer programming elsewhere.

Why create a new programming language? [closed]

What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages, we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, Domain Specific Languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, Domain specific problem solving, Complexity in context
I love creating new languages for fun, and for tackling domain-specific problems. A very simple example might be Wikipedia markup, or something as complex as Erlang, which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember, a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language built for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work, what makes a language parsable, what makes semantics sensible and the implementation of this, I think can make you a better programmer.
Compilers embody a lot of theory that underpins computer science:
Translation, abstraction, interpretation, data structures, state .... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn things independently but compilers are a great context to learn complex topics such as DFA/NDFA automata, stack-based parsers, abstract syntax trees ....
Compilers are beautiful machines, I think. :)
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post -- though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that exactly this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and you then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (on a different dimension) as you will by writing an interpreter or compiler.
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
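To make that concrete, here is a minimal sketch of what such a little compiler could look like (in Haskell; the one-command-per-line input format and the emitted stub text are invented for illustration, not the actual project's):
import Data.Maybe (mapMaybe)
-- One declaration per line: "CommandName arg1 arg2 ..."
data Command = Command String [String]
parseLine :: String -> Maybe Command
parseLine line = case words line of
  (name : args) -> Just (Command name args)
  []            -> Nothing  -- skip blank lines
-- Emit a boilerplate stub in the "underlying" language.
emit :: Command -> String
emit (Command name args) =
  "class " ++ name ++ "Command { run(" ++ unwords args ++ ") { ... } }"
main :: IO ()
main = interact (unlines . map emit . mapMaybe parseLine . lines)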
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you may.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then most likely you're missing something. There needs to be a fairly substantial gap between "available" tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages that you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the least the experience you gain from doing it will help you understand the implementation of programming languages, and some of it you will be able to apply to future computing problems you run into.
Writing a compiler or interpreter requires a very firm understanding in computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding in hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and the languages you work with specifically. You will have a better appreciation for syntax and trade-offs the language designers took when they wrote their specification.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance - performance of what? IMHO execution speed should not depend on the language constructs but on what the language is translated to - which is something different: like creating a syntax for a language and writing a compiler/virtual machine for it.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy with .NET. There are bugs in the framework Microsoft will not fix and thanks to GAC signing I cannot. And even if it were not for GAC signing, hotpatching a global library is a bad idea (might break some other application).
I for one don't care about how compilers work, don't care about learning new languages, and don't care about using scripting languages like perl and javascript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with problems that really matter (that keep you or your firm from losing money).

What languages implement features from functional programming?

Lisp developed a set of interesting language features quite early on in the academic world, but most of them never caught on in production environments.
Some languages, like JavaScript, adapted basic features like garbage collection and lexical closures, but all the stuff that might actually change how you write programs on a large scale, like powerful macros, the code-as-data thing and custom control structures, only seems to propagate within other functional languages, none of which are practical to use for non-trivial projects.
The functional programming community also came up with a lot of other interesting ideas (apart from functional programming itself), like referential transparency, generalised case-expressions (ie, pattern-matching, not crippled like C/C# switches) and curried functions, which seem obviously useful in regular programming and should be easy to integrate with existing programming practice, but for some reason seem to be stuck in the academic world forever.
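(For concreteness, a small sketch in Haskell of such a generalised case-expression, matching on structure and guards rather than on constants; the describe function is invented for the example:)
describe :: [Int] -> String
describe xs = case xs of
  []              -> "empty"
  [x]             -> "a single element: " ++ show x
  (x : _) | x < 0 -> "starts negative"
  _               -> "two or more elements"
main :: IO ()
main = mapM_ (putStrLn . describe) [[], [7], [-3, 1], [1, 2, 3]]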
Why do these features have such a hard time getting adopted? Are there any modern, practical languages that actually learn from Lisp instead of half-assedly copying "first class functions", or is there an inherent conflict that makes this impossible?
Are there any modern, practical languages that actually learn from Lisp instead of half-assedly copying "first class functions", or is there an inherent conflict that makes this impossible?
Why aren't Lisp, Haskell, OCaml, or F# modern?
You might just need to take it on yourself to look at them, and realize that they are more robust, with libraries like Java's, than you'd think.
A lot of features have been adopted from functional languages to other languages. But vice versa -- (some) functional languages have objects, for example.
I suggest you try Clojure. Syntactically beautiful dialect, functional (in the ML sense), and fast. You get immutability, software transactional memory, multiversion concurrency control, a REPL, SLIME support, and an inexhaustible FFI. It's the Lisp (& Haskell) for the Business Programmer. I'm having a great time using it daily in my real job.
There is no known correlation between a language "catching on" and whether or not it has powerful, well-researched, well-designed features.
A lot has been said on the subject. It exists all over the place in technology, and also the arts. We know artist A has more training and produces works of greater breadth and depth than artist B, yet artist B is far more successful in the marketplace. Is it because there's a zeitgeist? Is it because artist B has better marketing? Is it because most people won't take the time to understand artist A? Maybe artist B is secretly awful and we should mistrust experts who make judgements about artists? Probably all of the above, to some degree or another.
This drives people who study the arts, and people who study programming languages, crazy.
Scala is a cool functional/OO language with pattern matching, first class functions, and the like. It has the advantage of compiling to Java bytecode and inter-operates well with Java code.
Common Lisp, used in the real world albeit not widely so, I guess.
Python or Ruby. See Paul Graham's thoughts on this in the question "I like Lisp but my company won't let me use it. What should I do?".
Scala is the absolute king of languages which have adopted significant academic features. Higher kinds, self types, polymorphic pattern matching, etc. All of these are bleeding-edge (or near to it) academic research topics that have been incorporated into Scala as fundamental features. Arguably, this has been to the detriment of the language's simplicity, but it does lead to some very interesting patterns.
C# is more mainstream than Scala, but it also has adopted fewer of these "out-there" functional features. LINQ is a limited implementation for Wadler's generalized list comprehensions, and everyone knows about lambdas. But for all that, C# (rightfully) remains a bit conservative in adopting research features from the academic world.
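For reference, the construct Wadler generalized looks like this as a Haskell list comprehension; LINQ's from/where/select maps onto it almost clause for clause (the names here are invented for the example):
adults :: [(String, Int)] -> [String]
adults people = [name | (name, age) <- people, age >= 18]
main :: IO ()
main = print (adults [("Ann", 34), ("Bob", 12), ("Cid", 19)])  -- ["Ann","Cid"]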
Erlang has recently gained renewed exposure not only through being used by Twitter, but also through the rise of XMPP-driven messaging and implementations such as ejabberd. It sports many of the ideas coming from functional programming, being a language designed with that in mind. Initially used to run telephone switches, it was conceived by Ericsson to run the first GSM networks. It is still around, it is fully functional (as a language) and used in many production environments.
Lua.
It's used as a scripting/extension language for a number of games (like World of Warcraft) and applications (Snort, NMAP, Wireshark, etc). In fact, according to an Adobe developer, Adobe's Lightroom is over 40% Lua.
The guys behind Lua have repeatedly listed Scheme and Lisp as major influences on Lua, and Lua has even been described as Scheme without the parentheses.
Have you checked out F#?
Lots of dynamic programming languages implement ideas from functional programming. The newer .NET languages (C# and VB) have what they call lambdas, but these aren't side-effect free.
It's not difficult combining concepts from functional programming and object oriented programming for example but it doesn't always make a lot of sense. Object oriented languages (try to) encapsulate state inside objects while functional languages encapsulate state inside functions. If you combine objects and functions in one language it gets harder to make sense of all this.
There have been a lot of languages that have combined these paradigms by just throwing them together (F#), and this can be useful, but I think we still need a couple of decades of playing with languages like this until we can create a new paradigm that will successfully combine the ideas from OO and functional programming.
C# 3.0 definitely does.
C# now has
Lambda Expressions
Higher Order Functions
Map / Reduce + Filter (folding?) over lists and all types which implement IEnumerable.
LINQ
Object + Collection Initializers.
The last two list items may not fall under proper functional programming; anyway, the answer is that C# has implemented many useful concepts from Lisp etc.
In addition to what was said, a lot of LISP goodness is based on guaranteed lack of side-effects and using built-in data structures. Both rarely hold in real world. ML is probably better functional base.
Lisp developed a set of interesting language features quite early on in the academic world, but most of them never caught on in production environments.
Because the kind of people who manage software developers aren't the kinds of people who you can have an interesting chat comparing different language features with. Around 2000, I wanted to use LISP to implement XML-to-HTML transforms on our corporate website (this is around the time of Amazon implementing their backend in LISP). I didn't get to. This is mildly ironic seeing as the company I was working for made and sold a Common LISP environment.
Another "real-world" language that implements functional programming features is Javascript. Since absolutely everything has a value, then high-order functions are easily implemented. You also have other tenants of functional programming such as lambda functions, closures, and currying.
The features you refer to ("powerful" macros, the code-as-data thing and custom control structures) have not propagated within other functional languages. They died after Lisp taught us that they are a bad idea.
Modern functional languages (OCaml, Haskell, Erlang, Scala, F#, C# 3.0, JavaScript) do not have those features.
Cheers,
Jon Harrop.

Why functional languages? [closed]

I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application?
Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to).
One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way.
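The same property can be sketched in Haskell (a minimal example assuming GHC and the parallel package; Erlang's mechanism is different, but it rests on the same purity):
import Control.Parallel (par, pseq)
-- fib is side-effect free, so evaluating a and b in either order (or
-- simultaneously on two cores) cannot change the printed result.
fib :: Integer -> Integer
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
main :: IO ()
main =
  let a = fib 30
      b = fib 31
  in a `par` (b `pseq` print (a + b))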
And because functions in functional languages behave very similarly to mathematical functions, it's easy to translate the latter into functional languages. In some cases, this can make code more readable.
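For example, the textbook definition "0! = 1, n! = n * (n-1)!" transliterates almost symbol for symbol (shown here in Haskell):
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)
main :: IO ()
main = print (factorial 10)  -- 3628800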
Traditionally, one of the big disadvantages of functional programming was also the lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala this is easier.
Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot of functional programming features, and you can do functional programming in Python too. I think the popularity of functional programming mostly comes down to two reasons: concurrency is getting to be a real problem in normal programming, because we're getting more and more multiprocessor computers; and the languages are getting more accessible.
I don't think that there's any question about the functional approach to programming "catching on", because it's been in use (as a style of programming) for about 40 years. Whenever an OO programmer writes clean code that favors immutable objects, that code is borrowing functional concepts.
However, languages that enforce a functional style are getting lots of virtual ink these days, and whether those languages will become dominant in the future is an open question. My own suspicion is that hybrid, multi-paradigm languages such as Scala or OCaml will likely dominate over "purist" functional languages, in the same way that pure OO languages (Smalltalk, Beta, etc.) have influenced mainstream programming but haven't ended up as the most widely-used notations.
Finally, I can't resist pointing out that your comments re FP are highly parallel to the remarks I heard from procedural programmers not that many years ago:
The (mythical, IMHO) "average" programmer doesn't understand it.
It's not widely taught.
Any program you can write with it can be written another way with current techniques.
Just as graphical user interfaces and "code as a model of the business" were concepts that helped OO become more widely appreciated, I believe that increased use of immutability and simpler (massive) parallelism will help more programmers see the benefits that the functional approach offers. But as much as we've learned in the past 50 or so years that make up the entire history of digital computer programming, I think we still have much to learn. Twenty years from now, programmers will look back in amazement at the primitive nature of the tools we're currently using, including the now-popular OO and FP languages.
The main plus for me is its inherent parallelism, especially as we are now moving away from higher CPU clock frequency and towards more and more cores.
I don't think it will become the next programming paradigm and completely replace OO type methods, but I do think we will get to the point that we need to either write some of our code in a functional language, or our general purpose languages will grow to include more functional constructs.
Even if you never work in a functional language professionally, understanding functional programming will make you a better developer. It will give you a new perspective on your code and programming in general.
I say there's no reason to not learn it.
I think the languages that do a good job of mixing functional and imperative style are the most interesting and are the most likely to succeed.
I'm always skeptical about the Next Big Thing. Lots of times the Next Big Thing is pure accident of history, being there in the right place at the right time no matter whether the technology is good or not. Examples: C++, Tcl/Tk, Perl. All flawed technologies, all wildly successful because they were perceived either to solve the problems of the day or to be nearly identical to entrenched standards, or both. Functional programming may indeed be great, but that doesn't mean it will be adopted.
But I can tell you why people are excited about functional programming: many, many programmers have had a kind of "conversion experience" in which they discover that using a functional language makes them twice as productive (or maybe ten times as productive) while producing code that is more resilient to change and has fewer bugs. These people think of functional programming as a secret weapon; a good example of this mindset is Paul Graham's Beating the Averages. Oh, and his application? E-commerce web apps.
Since early 2006 there has also been some buzz about functional programming and parallelism. Since people like Simon Peyton Jones have been worrying about parallelism off and on since at least 1984, I'm not holding my breath until functional languages solve the multicore problem. But it does explain some of the additional buzz right about now.
In general, American universities are doing a poor job teaching functional programming. There's a strong core of support for teaching intro programming using Scheme, and Haskell also enjoys some support there, but there's very little in the way of teaching advanced technique for functional programmer. I've taught such a course at Harvard and will do so again this spring at Tufts. Benjamin Pierce has taught such a course at Penn. I don't know if Paul Hudak has done anything at Yale. The European universities are doing a much better job; for example, functional programming is emphasized in important places in Denmark, the Netherlands, Sweden, and the UK. I have less of a sense of what's happening in Australasia.
I don't see anyone mentioning the elephant in the room here, so I think it's up to me :)
JavaScript is a functional language. As more and more people do more advanced things with JS, especially leveraging the finer points of jQuery, Dojo, and other frameworks, FP will be introduced by the web-developer's back-door.
In conjunction with closures, FP makes JS code really light, yet still readable.
Cheers,
PS: "Most applications are simple enough to be solved in normal OO ways" -- OO ways have not always been "normal." This decade's standard was last decade's marginalized concept.
Functional programming is math. Paul Graham on Lisp (replace "Lisp" with "functional programming"):
So the short explanation of why this 1950s language is not obsolete is that it was not technology but math, and math doesn't get stale. The right thing to compare Lisp to is not 1950s hardware, but, say, the Quicksort algorithm, which was discovered in 1960 and is still the fastest general-purpose sort.
I bet you didn't know you were doing functional programming when you used:
Excel formulas
Quartz Composer
JavaScript
Logo (Turtle graphics)
LINQ
SQL
Underscore.js (or Lodash)
D3
The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it
That one is just a matter of time though. Your average corporate programmer learns whatever the current Big Thing is. 15 years ago, they didn't understand OOP.
If functional programming catches on, your "average corporate programmers" will follow.
It's not really taught at universities (or is it nowadays?)
It varies a lot. At my university, SML is the very first language students are introduced to.
I believe MIT teaches Lisp as a first-year course. These two examples may not be representative, of course, but I believe most universities at the very least offer some optional courses on functional programming, even if they don't make it a mandatory part of the curriculum.
Most applications are simple enough to be solved in normal OO ways
It's not really a matter of "simple enough" though. Would a solution be simpler (or more readable, robust, elegant, performant) in functional programming? Many things are "simple enough to be solved in Java", but it still requires a godawful amount of code.
In any case, keep in mind that functional programming proponents have claimed that it was the Next Big Thing for several decades now. Perhaps they're right, but keep in mind that they weren't when they made the same claim 5, 10 or 15 years ago.
One thing that definitely counts in their favor, though, is that recently, C# has taken a sharp turn towards functional programming, to the extent that it's practically turning a generation of programmers into functional programming programmers, without them even noticing. That might just pave the way for the functional programming "revolution". Maybe. ;)
Man cannot understand the perfection and imperfections of his chosen art if he cannot see the value in other arts. Following rules only permits development up to a point in technique and then the student and artist has to learn more and seek further. It makes sense to study other arts as well as those of strategy.
Who has not learned something more about themselves by watching the activities of others? To learn the sword study the guitar. To learn the fist study commerce. To just study the sword will make you narrow-minded and will not permit you to grow outward.
-- Miyamoto Musashi, "A Book of Five Rings"
One key feature in a functional language is the concept of first-class functions. The idea is that you can pass functions as parameters to other functions and return them as values.
Functional programming involves writing code that does not change state. The primary reason for doing so is that successive calls to a function will yield the same result. You can write functional code in any language that supports first-class functions, but there are some languages, like Haskell, which do not allow you to change state. In fact, you're not supposed to have any side effects (like printing out text) at all - which sounds like it could be completely useless.
Haskell instead employs a different approach to I/O: monads. An I/O action is an ordinary value that describes an operation for the runtime to execute; until it reaches the top level, it is simply a value in the system like any other.
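A minimal sketch of that idea (the greet helper is invented for illustration):
-- greet builds an I/O action as a value; nothing is printed until the
-- runtime executes the action that main denotes.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)
main :: IO ()
main = do
  let action = greet "world"  -- constructed, but not yet executed
  action                      -- executed here
  mapM_ greet ["Alonzo", "Alan"]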
What advantages does functional programming provide? Functional programming allows coding with less potential for bugs, because each component is completely isolated. Also, using recursion and first-class functions allows for simple proofs of correctness, which typically mirror the structure of the code.
I don't think most realistic people think that functional programming will catch on (become the main paradigm like OO). After all, most business problems are not pretty math problems but hairy imperative rules to move data around and display them in various ways, which means they're not a good fit for the pure functional programming paradigm (the learning curve of monads far exceeds OO's).
OTOH, functional programming is what makes programming fun. It makes you appreciate the inherent, timeless beauty of succinct expressions of the underlying math of the universe. People say that learning functional programming will make you a better programmer. This is of course highly subjective. I personally don't think that's completely true either.
It makes you a better sentient being.
I'd point out that everything you've said about functional languages, most people were saying about object-oriented languages about 20 years ago. Back then it was very common to hear about OO:
* The average corporate programmer, e.g. most of the people I work with, will not understand it and most work environments will not let you program in it
* It's not really taught at universities (or is it nowadays?)
* Most applications are simple enough to be solved in normal IMPERATIVE ways
Change has to come from somewhere. A meaningful and important change will make itself happen regardless of whether people trained in earlier technologies take the opinion that change isn't necessary. Do you think the change to OO was good despite all the people that were against it at the time?
I must be dense, but I still don't get it. Are there any actual examples of small application's written in a functional language like F# where you can look at the source code and see how and why it was better to use such an approach than, say, C#?
F# could catch on because Microsoft is pushing it.
Pro:
F# is going to be part of next version of Visual Studio
Microsoft has been building a community for some time now - evangelists, books, consultants that work with high-profile customers, significant exposure at MS conferences.
F# is a first-class .NET language and it's the first functional language that comes with a really big foundation (not that I'm saying that Lisp, Haskell, Erlang, Scala, OCaml do not have lots of libraries; they are just not as complete as .NET is)
Strong support for parallelism
Contra:
F# is very hard to get started with, even if you are good with C# and .NET - at least for me :(
it will probably be hard to find good F# developers
So, I give F# a 50:50 chance of becoming important. Other functional languages are not going to make it in the near future.
I think one reason is that some people feel that the most important part of whether a language will be accepted is how good the language is. Unfortunately, things are rarely so simple. For example, I would argue that the biggest factor behind Python's acceptance isn't the language itself (although that is pretty important). The biggest reason why Python is so popular is its huge standard library and the even bigger community of third-party libraries.
Languages like Clojure and F# may be exceptions to this rule, since they are built on the JVM and CLR and inherit those platforms' libraries, so I don't have an answer for them.
It seems to me that those who never learned Lisp or Scheme as undergraduates are now discovering them. As with a lot of things in this field, there is a tendency to hype and create high expectations...
It will pass.
Functional programming is great. However, it will not take over the world. C, C++, Java, C#, etc will still be around.
What I think will come of this is more cross-language ability: implementing things in a functional language, for example, and then exposing them to other languages.
When reading "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney of Epic Games (available as a PPT, or via Google's HTML version), my first thought was: I've got to learn Haskell.
Most applications can be solved in [insert your favorite language, paradigm, etc. here].
Although this is true, different tools are suited to different problems. Functional programming just offers another high (higher?) level of abstraction that allows us to do our jobs more effectively when used correctly.
Things have been moving in a functional direction for a while. The two cool new kids of the past few years, Ruby and Python, are both radically closer to functional languages than what came before them — so much so that some Lispers have started supporting one or the other as "close enough."
And with the massively parallel hardware putting evolutionary pressure on everyone — and functional languages in the best place to deal with the changes — it's not as far a leap as it once was to think that Haskell or F# will be the next big thing.
It's catching on because it's the best tool around for controlling complexity.
See:
- slides 109-116 of Simon Peyton Jones's talk "A Taste of Haskell"
- "The Next Mainstream Programming Language: A Game Developer's Perspective" by Tim Sweeney
Check out Why Functional Programming Matters.
Have you been following the evolution of programming languages lately? Every new release of all mainstream programming languages seems to borrow more and more features from functional programming.
Closures, anonymous functions, and passing and returning functions as values used to be exotic features known only to Lisp and ML hackers. But gradually C#, Delphi, Python, Perl, and JavaScript have added support for closures. It's not possible for any up-and-coming language to be taken seriously without closures.
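A tiny Haskell sketch of a closure, with made-up names for illustration: the returned function closes over `n`, and the result is a first-class value you can pass around.

```haskell
-- 'makeAdder' returns a function that captures 'n' from its environment.
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n

main :: IO ()
main = do
  let add10 = makeAdder 10             -- a closure, stored like any value
  print (add10 5)                      -- 15
  print (map (makeAdder 1) [1, 2, 3])  -- [2,3,4]
```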
Several languages, notably Python, C#, and Ruby have native support for list comprehensions and list generators.
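For example, here is an illustrative Haskell comprehension (my own toy example); since Haskell's lists are lazy, the same comprehension also behaves like a generator.

```haskell
-- Pythagorean triples with sides up to 20, read almost like set notation.
triples :: [(Int, Int, Int)]
triples = [ (a, b, c) | c <- [1..20], b <- [1..c], a <- [1..b]
                      , a*a + b*b == c*c ]
```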
ML pioneered generic programming in 1973, but support for generics ("parametric polymorphism") has only become an industry standard in the last five years or so. If I remember correctly, Fortran gained generics in 2003, followed by Java in 2004, C# in 2005, and Delphi in 2008. (I know C++ has had templates since the early 1990s, but 90% of discussions on C++'s STL start with "here there be demons".)
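Parametric polymorphism in the ML/Haskell tradition looks like this (a toy sketch): one definition, checked once by the compiler, usable at every type.

```haskell
-- The type variables 'a' and 'b' stand for *any* types.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- swap (1, "one") == ("one", 1);  swap ('a', True) == (True, 'a')
```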
What makes these features appealing to programmers? It should be plainly obvious: it helps programmers write shorter code. All languages in the future are going to support—at a minimum—closures if they want to stay competitive. In this respect, functional programming is already in the mainstream.
"Most applications are simple enough to be solved in normal OO ways"
Who says you can't use functional programming for simple things too? Not every functional program needs to be a compiler, theorem prover, or massively parallel telecommunications switch. I regularly use F# for ad hoc throwaway scripts, in addition to my more complicated projects.
Wow - this is an interesting discussion. My own thoughts on this:
FP makes some tasks relatively simple (compared to non-FP languages).
Non-FP languages are already starting to take ideas from FP, so I suspect this trend will continue and we will see more of a merge, which should make the leap to FP easier for people.
I don't know whether it will catch on or not, but from my investigations, a functional language is almost certainly worth learning, and will make you a better programmer. Just understanding referential transparency makes a lot of design decisions so much easier, and the resulting programs much easier to reason about. Basically, if you run into a problem, it tends to be a problem with the output of a single function, rather than a problem with inconsistent state that could have been caused by any of the hundreds of classes/methods/functions in an imperative language with side effects.
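A small Haskell sketch of what referential transparency buys you (the function names here are invented): because `f 3` always denotes the same value, equals can be substituted for equals when reasoning about the program.

```haskell
-- 'f' has no side effects, so every occurrence of 'f 3' is interchangeable.
f :: Int -> Int
f x = x * x + 1

prog1, prog2 :: Int
prog1 = f 3 + f 3
prog2 = let y = f 3 in y + y   -- a safe rewrite of prog1; this rewrite is
                               -- not valid in general if 'f' has side effects
```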
The stateless nature of FP maps more naturally to the stateless nature of the web, so functional languages lend themselves to more elegant, RESTful web apps. Contrast this with Java and .NET frameworks that resort to horribly ugly hacks like ViewState and session keys to maintain application state, propping up the (occasionally quite leaky) abstraction of a stateful imperative language on top of an essentially stateless platform like the web.
And also, the more stateless your application, the more easily it can lend itself to parallel processing. Terribly important for the web, if your website happens to get popular. It's not always straightforward to just add more hardware to a site to get better performance.
My view is that it will catch on now that Microsoft has pushed it much further into the mainstream. For me it's attractive because of what it can do for us, because it's a new challenge, and because of the job opportunities it presents for the future.
Once mastered it will be another tool to further help make us more productive as programmers.
A point missed in the discussion is that the best type systems are found in contemporary FP languages. What's more, compilers can infer all (or at least most) types automatically.
It is interesting that one spends half one's time writing type names when programming in Java, yet Java is far from type safe. In contrast, you may never write a type in a Haskell program (except as a kind of compiler-checked documentation), and the code is 100% type safe.
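For instance, in a sketch like the following (hypothetical definitions of my own), GHC infers every type without a single annotation; you can confirm the inferred types with `:type` in GHCi.

```haskell
-- No type annotations written, yet everything is fully checked at compile time.
pair x y = (x, y)        -- GHC infers: pair :: a -> b -> (a, b)
firsts xs = map fst xs   -- GHC infers: firsts :: [(a, b)] -> [a]
```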
I agree with the first point, but times change. Corporations will respond, even if they're late adopters, if they see that there's an advantage to be had. Life is dynamic.
They were teaching Haskell and ML at Stanford in the late 1990s. I'm sure that places like Carnegie Mellon, MIT, Stanford, and other good schools are presenting it to students.
I agree that most "expose relational databases on the web" applications will continue in that vein for a long time. Java EE, .NET, Ruby on Rails, and PHP have evolved some pretty good solutions to that problem.
You've hit on something important: It might be the problem that can't be solved easily by other means that will boost functional programming. What would that be?
Will massive multicore hardware and cloud computing push them along?
Because functional programming has significant benefits in terms of productivity, reliability and maintainability. Many-core may be a killer application that finally gets big corporations to switch over despite large volumes of legacy code. Furthermore, even big commercial languages like C# are taking on a distinct functional flavour as a result of many-core concerns. Side effects simply don't fit well with concurrency and parallelism.
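As a rough illustration of why purity helps here, assuming the third-party `parallel` package is installed: because the worker function below is pure, evaluating the list elements in parallel cannot change the result.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A pure stand-in for real work; purity is what makes the parallel
-- evaluation below safe.
expensive :: Int -> Int
expensive n = sum [1 .. n * 10000]

main :: IO ()
main = print (sum (parMap rdeepseq expensive [1 .. 100]))
-- compile with: ghc -threaded -O2, run with: +RTS -N
```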
I do not agree that "normal" programmers won't understand it. They will, just like they eventually understood OOP (which is just as mysterious and weird, if not more so).
Also, most universities do teach functional programming; many even teach it in the first programming course.
In addition to the other answers, casting the solution in pure functional terms forces one to understand the problem better. Conversely, thinking in a functional style will develop better* problem solving skills.
*Either because the functional paradigm is better or because it will afford an additional angle of attack.

Resources