Design patterns for building loosely-coupled systems in dynamic/scripting languages - node.js

I have lots of experience building enterprise apps using Java/C# and have become accustomed to all the trappings that come with object-oriented, statically typed languages. Specifically, I've become quite adept at dealing with system complexity by using the standard tools of the trade:
interfaces/abstract types
object composition
dependency inversion
I'm being asked to engineer a fairly complex back-end message-processing system using a dynamic language with strong functional features (Lua). Such languages are all the rage these days (Node.js, JavaScript, etc.), so I'm happy to use this as an opportunity to jump on said bandwagon.
Can anyone suggest a sample application or architecture I can use to learn how to use things such as first-class functions, closures, and currying to build a complex, loosely coupled system?
Many thanks!

I suggest looking at the libraries/frameworks below; they are really well designed.
Keep in mind that JavaScript and Lua are very similar: replace objects with
tables, add coroutines and "nicer" syntax, and you have Lua.
Lua
Luvit: node.js in Lua.
node.js
Express: micro web framework.
Mocha: unit testing framework.

I've done quite a bit of research on "design patterns" that can be applied in dynamic languages with first-class function support, and here are my findings.
Currying == Dependency Injection. Currying allows you to take a function and repackage it as a new function with one or more of its parameter values already assigned. This is very similar to an IoC container instantiating a class "bootstrapped" with all its dependencies and ready for consumption by clients.
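For illustration, here is a minimal JavaScript sketch of that idea (all names below are made up): the database dependency is baked in once at the composition root, and the rest of the code consumes a plain, ready-to-use function.

```javascript
// A "service" function that needs a dependency (db) plus per-call arguments.
function findUser(db, userId) {
  return db.query("SELECT * FROM users WHERE id = ?", [userId]);
}

// Partial application: bake the dependency in, much like an IoC container
// bootstrapping an object with its collaborators.
function inject(fn, ...deps) {
  return (...args) => fn(...deps, ...args);
}

// Hypothetical in-memory stand-in for a real database connection.
const fakeDb = {
  query: (sql, params) => ({ sql, params, rows: [{ id: params[0], name: "Ada" }] })
};

// Composition root: wire dependencies once...
const findUserById = inject(findUser, fakeDb);

// ...and clients call a plain function, unaware of the db.
console.log(findUserById(42).rows);
```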
First Class Functions == Command pattern. Since first class functions can be passed around like values, you basically get the Command pattern for free and without the overhead.
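As a quick sketch of that point (again with made-up names), a "command" is just a closure capturing its receiver and arguments; a history of such commands gives you queuing, logging and undo without a class hierarchy:

```javascript
// Each command is a pair of closures capturing the receiver and the argument.
function deposit(account, amount) {
  return {
    execute: () => { account.balance += amount; },
    undo:    () => { account.balance -= amount; }
  };
}

const account = { balance: 0 };
const history = [];

function run(command) {
  command.execute();
  history.push(command);          // kept around for undo/replay
}

run(deposit(account, 100));
run(deposit(account, 50));
console.log(account.balance);     // 150

history.pop().undo();             // undo the last command
console.log(account.balance);     // 100
```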
References:
First Class Functions == Command pattern
Functional Dependency Injection via Currying


Elixir and Haskell interoperability

As a platform for handling concurrent problems, Elixir/OTP seems to be the best suited solution.
When writing an application with a web interface, consider the case in which I want to reason about, and decouple, the application logic using another functional language, namely Haskell (due to benefits like its advanced detection of errors at compile time, static typing, etc.). I would then handle concurrency using GenServers and attach a web interface using Phoenix.Channels.
Is this setup even possible using NIFs? Also, would true concurrency be maintained? I'm not sure that I'm following the correct line of reasoning here, but would a new Haskell process be able to be spawned in line with GenServer demands, and would the two be able to communicate efficiently?
This setup is certainly possible using NIFs and GHC's FFI with a small amount of boilerplate written in C. But NIFs are best used for short synchronous computations with no side effects and I get the feeling that that isn't what these operations are.
You'd probably be better off with C Nodes for the Haskell parts of the application. Most of the documentation you'll find for that will be for Erlang rather than Elixir, but given Elixir's easy interop with Erlang, it should be pretty straightforward (someone has even written an example). Most of the hard work will be writing a Haskell "C Node", for which a cursory glance at Hackage and GitHub turns up nothing.

What's the status of current Functional Reactive Programming implementations?

I'm trying to visualize some simple automatic physical systems (such things as pendulums, robot arms, etc.) in Haskell.
Often those systems can be described by equations like
df/dt = c*f(t) + u(t)
where u(t) represents some kind of 'intelligent control'. Such systems seem to fit very nicely into the Functional Reactive Programming paradigm.
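For concreteness, here is what such a system computes when stepped explicitly; this is just a plain Euler-integration sketch in JavaScript (the constant and the control signal u are placeholders), not an FRP formulation. An FRP library essentially lets you express the same thing declaratively as an integral over time-varying values instead of an explicit loop.

```javascript
// df/dt = c*f(t) + u(t), integrated with a naive Euler step.
const c = -0.5;               // system coefficient (made-up value)
const u = t => Math.sin(t);   // the 'intelligent control' input (placeholder)

function simulate(f0, dt, steps) {
  let f = f0;
  const trace = [];
  for (let i = 0; i < steps; i++) {
    const t = i * dt;
    trace.push({ t, f });
    f += dt * (c * f + u(t)); // f(t+dt) ≈ f(t) + dt * df/dt
  }
  return trace;
}

console.log(simulate(1.0, 0.01, 5));
```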
So I grabbed the book "The Haskell School of Expression" by Paul Hudak, and found that the domain-specific language "FAL" (for Functional Animation Language) presented there actually works quite pleasantly for my simple toy systems (although some functions, notably integrate, seemed to be a bit too lazy for efficient use, but that was easily fixable).
My question is, what's the more mature, up-to-date, well-maintained, performance-tuned alternative for more advanced, or even practical applications today?
This wiki page lists several options for Haskell, but I'm not clear on the following points:
The status of "reactive", the project from Conal Eliott who is (as I understand it) one of the inventers of this programming paradigm, looks a bit stale. I love his code, but maybe I should try other more up-to-date alternatives? What's the primary difference between them, in terms of syntax/performance/runtime-stability?
To quote from a 2011 survey, Section 6: "... FRP implementations are still not efficient enough or predictable enough in performance to be used effectively in domains which require latency guarantees ...". Although the survey suggests some interesting possible optimizations, given that FRP has been around for more than 15 years, I get the impression that this performance problem might be very, or even inherently, difficult to solve, at least within a few years. Is this true?
The same author of the survey talks about "time leaks" in his blog. Is the problem unique to FRP, or is it something we generally run into when programming in a pure, non-strict language? Have you ever found it just too difficult to stabilize an FRP-based system over time, if not to make it performant enough?
Is this still a research-level project? Are people like plant engineers, robotics engineers, financial engineers, etc. actually using these (in whatever language suits their needs)?
Although I personally prefer a Haskell implementation, I'm open to other suggestions. For example, it would be particularly fun to have an Erlang implementation --- it would then be very easy to have an intelligent, adaptive, self-learning server process!
Right now there are mainly two practical Haskell libraries out there for functional reactive programming. Both are maintained by a single person each, but are receiving code contributions from other Haskell programmers as well:
Netwire focusses on efficiency, flexibility and predictability. It has its own event paradigm and can be used in areas where traditional FRP does not work, including network services and complex simulations. Style: applicative and/or arrowized. Initial author and maintainer: Ertugrul Söylemez (this is me).
reactive-banana builds on the traditional FRP paradigm. While it is practical to use, it also serves as ground for classic FRP research. Its main focus is on user interfaces, and there is a ready-made interface to wx. Style: applicative. Initial author and maintainer: Heinrich Apfelmus.
You should try both of them, but depending on your application you will likely find one or the other to be a better fit.
For games, networking, robot control and simulations you will find Netwire to be useful. It comes with ready-made wires for those applications, including various useful differentials, integrals and lots of functionality for transparent event handling. For a tutorial visit the documentation of the Control.Wire module on the page I linked.
For graphical user interfaces currently your best choice is reactive-banana. It already has a wx interface (as a separate library reactive-banana-wx) and Heinrich blogs a lot about FRP in this context including code samples.
To answer your other questions: FRP isn't suitable in scenarios where you need real-time predictability. This is largely due to Haskell, but unfortunately FRP is difficult to realize in lower level languages. As soon as Haskell itself becomes real-time-ready, FRP will get there, too. Conceptually Netwire is ready for real-time applications.
Time leaks aren't really a problem anymore, because they are largely related to the monadic framework. Practical FRP implementations simply don't offer a monadic interface. Yampa has started this and Netwire and reactive-banana both build on that.
I know of no commercial or otherwise large scale projects using FRP right now. The libraries are ready, but I think the people aren't – yet.
Although there are some good answers already, I'm going to attempt to answer your specific questions.
reactive is not usable for serious projects, due to time leak problems. (see #3). The current library with the most similar design is reactive-banana, which was developed with reactive as an inspiration, and in discussion with Conal Elliott.
Although Haskell itself is inappropriate for hard real-time applications, it is possible to use Haskell for soft realtime applications in some cases. I'm not familiar with current research, but I don't believe this is an insurmountable problem. I suspect that either systems like Yampa, or code generation systems like Atom, are possibly the best approach to solving this.
A "time leak" is a problem specific to switchable FRP. The leak occurs when a system is unable to free old objects because it may need them if a switch were to occur at some point in the future. In addition to a memory leak (which can be quite severe), another consequence is that, when the switch occurs, the system must pause while the chain of old objects is traversed to generate current state.
Non-switchable FRP libraries such as Yampa and older versions of reactive-banana don't suffer from time leaks. Switchable FRP libraries generally employ one of two schemes: either they have a special "creation monad" in which FRP values are created, or they use an "aging" type parameter to limit the contexts in which switches can occur. elerea (and possibly netwire?) uses the former, whereas recent reactive-banana and grapefruit use the latter.
By "switchable frp", I mean one which implements Conal's function switcher :: Behavior a -> Event (Behavior a) -> Behavior a, or identical semantics. This means that the shape of the network can dynamically switch as it's run.
This doesn't really contradict #ertes's statement about monadic interfaces: it turns out that providing a Monad instance for an Event makes time leaks possible, and with either of the above approaches it's no longer possible to define the equivalent Monad instances.
Finally, although there's still a lot of work remaining to be done with FRP, I think some of the newer platforms (reactive-banana, elerea, netwire) are stable and mature enough that you can build reliable code from them. But you may need to spend a lot of time learning the ins and outs in order to understand how to get good performance.
I'm going to list a couple of items in the Mono and .Net space and one from the Haskell space that I found not too long ago. I'll start with Haskell.
Elm - link
Its description as per its site:
Elm aims to make front-end web development more pleasant. It introduces a new approach to GUI programming that corrects the systemic problems of HTML, CSS, and JavaScript. Elm allows you to quickly and easily work with visual layout, use the canvas, manage complicated user input, and escape from callback hell.
It has its own variant of FRP. From playing with its examples it seems pretty mature.
Reactive Extensions - link
Description from its front page:
The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators. Using Rx, developers represent asynchronous data streams with Observables, query asynchronous data streams using LINQ operators, and parameterize the concurrency in the asynchronous data streams using Schedulers. Simply put, Rx = Observables + LINQ + Schedulers.
Reactive Extensions comes from MSFT and implements many excellent operators that simplify handling events. It was open sourced just a couple of days ago. It's very mature and used in production; in my opinion it would have been a nicer API for the Windows 8 APIs than the one the TPL library provides, because observables can be both hot and cold and can be retried/merged etc., while tasks always represent hot or done computations that are either running, faulted or completed.
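To give a feel for the model, here is a tiny hand-rolled JavaScript sketch of the observable idea; it is not the actual Rx API, just its shape (push-based sources composed with query-style operators):

```javascript
// A source is a function that pushes values into an observer callback.
const fromArray = xs => observer => { xs.forEach(x => observer(x)); };

// Two query-style operators that wrap a source and return a new one.
const filter = p => source => observer => source(x => { if (p(x)) observer(x); });
const map    = f => source => observer => source(x => observer(f(x)));

// Compose: keep even numbers, then scale them.
const evens = map(x => x * 10)(filter(x => x % 2 === 0)(fromArray([1, 2, 3, 4])));

evens(x => console.log(x));   // 20, 40
```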
I've written server-side code using Rx for asynchrony, but I must admit that writing functionally in C# can be a bit annoying. F# has a couple of wrappers, but it's been hard to track the API development, because the group is relatively closed and isn't promoted by MSFT like other projects are.
Its open sourcing came with the open sourcing of its IL-to-JS compiler, so it could probably work well with JavaScript or Elm.
You could probably bind F#/C#/JS/Haskell together very nicely using a message broker, like RabbitMQ and SockJS.
Bling UI Toolkit - link
Description from its front page:
Bling is a C#-based library for easily programming images, animations, interactions, and visualizations on Microsoft's WPF/.NET. Bling is oriented towards design technologists, i.e., designers who sometimes program, to aid in the rapid prototyping of rich UI design ideas. Students, artists, researchers, and hobbyists will also find Bling useful as a tool for quickly expressing ideas or visualizations. Bling's APIs and constructs are optimized for the fast programming of throw away code as opposed to the careful programming of production code.
Complementary LtU article.
I've tested this, but not worked with it for a client project. It looks awesome and has nice C# operator overloading that forms the bindings between values. It uses dependency properties in WPF/SL/(WinRT) as event sources. Its 3D animations work well on reasonable hardware. I would use this if I end up on a project in need of visualizations; probably porting it to Windows 8.
ReactiveUI - link
Paul Betts, previously at MSFT, now at GitHub, wrote that framework. I've worked with it pretty extensively and like the model. It's more decoupled than Bling (by its nature, from using Rx and its abstractions), making it easier to unit test code that uses it. The GitHub client for Windows is written with it.
Comments
The reactive model is performant enough for most performance-demanding applications. If you are thinking of hard real-time, I'd wager that most GC'd languages have problems there. Rx and ReactiveUI create some number of small objects that need to be GCed, because that's how subscriptions are created/disposed and intermediate values are propagated through the reactive "monad" of callbacks. In general, on .NET I prefer reactive programming over task-based programming because callbacks are static (known at compile time, no allocation) while tasks are dynamically allocated (not known, all calls need an instance, garbage created), and lambdas compile into compiler-generated classes.
Obviously C# and F# are strictly evaluated, so time leaks aren't a problem here. The same goes for JS. They can be a problem with replayable or cached observables, though.

Are there real world applications that use metaprogramming?

We all know that metaprogramming is the concept of code == data (or programs that write programs).
But are there any applications that use it, and what are the advantages of using it?
This question may get closed, but I didn't see any related questions.
IDEs are full of metaprogramming:
code completion
code generation
automated refactoring
Metaprogramming is often used to work around the limitations of Java:
code generation to work around the verbosity (e.g. getter/setter)
code generation to work around the complexity (e.g. generating Swing code from a WYSIWYG editor)
compile time/load time/runtime bytecode rewriting to work around missing features (AOP, Kilim)
generating code based on annotations (Hibernate)
Frameworks are another example:
generating Models, Views, Controllers, Helpers, Testsuites in Ruby on Rails
generating Generators in Ruby on Rails (metacircular metaprogramming FTW!)
In Ruby, you pretty much cannot do anything without metaprogramming. Even simply defining a method is actually running code that generates code.
Even if you just have a simple shell script that sets up your basic project structure, that is metaprogramming.
Since code as data is one of the key concepts of Lisp, the best thing would be to look at real applications written in Lisp dialects.
At this link you can see an article about a real-world application written partly in Clojure, a dialect of Lisp.
The point is not to write programs that write programs just because you can, but to add new functionality to your language when you really need it. Just think if you could simply add a new keyword to Java or C#...
If you implement metaprogramming in a language-independent way, you get a program analysis and transformation system. This is precisely a tool that treats (arbitrary) programs as data. These can be used to carry out arbitrary transformations on arbitrary programs.
It also means you aren't limited by the specific metaprogramming features that the compiler guys happened to put into your language. For instance, while C++ has templates, it has no "reflection". But a program transformation system can provide reflection even if the base language doesn't have it. In particular, having a program transformation engine means never having to say "I'm sorry, your language doesn't support metaprogramming (well enough), so I can't do much except write code manually".
See our DMS Software Reengineering Toolkit for such a program transformation system. It has been used to build test coverage and profiling tools, code generation tools, tools to reshape the architecture of large-scale C++ applications, tools to migrate applications from one language to another, ... This is all extremely practical. Most of the tasks done with DMS would be completely impractical to do by hand.
Not a real-world application, but a talk about metaprogramming in Ruby:
http://video.google.com/videoplay?docid=1541014406319673545
Google TechTalks, August 3, 2006. Jack Herrington, the author of Code Generation in Action (Manning, July 2003), will talk about code generation techniques using Ruby. He will cover both do-it-yourself and off-the-shelf solutions in a conversation about where Ruby is as a tool, and where it's going.
A real-world example would be Django's model metaclass. It is the class of the class from which models inherit, and it is responsible for outfitting the model instances with all their attributes and methods.
Any ORM in a dynamic language is an instant example of practical metaprogramming. E.g. see how SQLAlchemy or Django's ORM creates classes for tables it discovers in the database, dynamically, in runtime.
ORMs and other tools in the Java world that use annotations to modify class behavior do a bit of metaprogramming, too.
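To make the ORM point above concrete, here is a minimal JavaScript sketch (the schema and the generated finder names are invented for illustration) of building a model class at runtime from column metadata that a real ORM would discover by introspecting the database:

```javascript
// Build a class at runtime from discovered column names.
function makeModel(tableName, columns) {
  class Model {
    constructor(row) {
      columns.forEach(col => { this[col] = row[col]; });
    }
    save() {
      // A real ORM would emit and execute SQL here; we only log the intent.
      console.log(`UPDATE ${tableName} ...`, columns.map(col => this[col]));
    }
  }
  // Generate one finder per column, the way many ORMs generate find_by_* helpers.
  columns.forEach(col => {
    Model[`findBy_${col}`] = value =>
      console.log(`SELECT * FROM ${tableName} WHERE ${col} = ?`, value);
  });
  return Model;
}

const User = makeModel("users", ["id", "name", "email"]);   // hypothetical schema
const u = new User({ id: 1, name: "Ada", email: "ada@example.com" });
u.save();
User.findBy_email("ada@example.com");
```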
Metaprogramming in C++ lets you write code that is transformed at compile time.
There are a few great examples I know about (google for them):
Blitz++, a library to write efficient code for manipulating arrays
Intel Array Building Blocks
CGAL
Boost::spirit, Boost::graph
Many compilers and interpreters are implemented with metaprogramming techniques internally - as a chain of code rewriting passes.
ORMs, project templates, and GUI code generation in IDEs have been mentioned already.
Domain Specific Languages are widely used, and the best way to implement them is to use metaprogramming.
Things like Autoconf are obviously cases of metaprogramming.
Actually, it's unlikely one can find an area of software development which won't benefit from one or another form of metaprogramming.

Most dynamic dynamic programming language [closed]

It seems I've got to agree with this post when it states that
[...] code in dynamically typed languages follows static-typing conventions
Much dynamic-language code I encounter does indeed seem to be quite static (thinking of PHP), whereas dynamic approaches look somewhat clumsy or unnecessary instead.
Most of the time, it's just about omitting type signatures, which, in the context of type-inference/structural typing, doesn't even have to imply dynamic typing at all.
So my question (and it's not meant to be too subjective) is: in which dynamic languages or fields of application are all these more advanced dynamic-language features (that couldn't be replicated in static/compiled languages as easily) actually and idiomatically used?
Examples:
Reflection
First-class continuations
Runtime object alteration/generation
Metaprogramming
Run-time code evaluation
Non-existent member behaviour
What are useful applications for such techniques?
Some examples of widespread application of the above techniques are:
Continuations make their appearance in web frameworks like Rails or Seaside. They can be used to allow an API to fake a local context. In Seaside or Rails this makes the API behave much more like a local GUI form handler than an HTTP request handler, which serves to simplify the task of coding the application's user interface elements. However, although many dynamic languages have strong support for continuations they are certainly not unique to this type of language.
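As a rough illustration of that idea (a sketch only, not how Seaside or Rails actually implement it), a JavaScript generator can play the role of a continuation: the multi-step flow reads like local, sequential code even though it is driven by separate stateless requests.

```javascript
// A multi-step "form flow" written as straight-line code; each yield suspends
// the flow until the next request arrives.
function* signupFlow() {
  const name  = yield "Please send your name";
  const email = yield `Hi ${name}, now send your email`;
  return `Registered ${name} <${email}>`;
}

const sessions = new Map();   // session id -> suspended flow ("continuation")

function handleRequest(sessionId, body) {
  let flow = sessions.get(sessionId);
  if (!flow) {
    flow = signupFlow();
    sessions.set(sessionId, flow);
    return flow.next().value;               // run to the first yield
  }
  const { value, done } = flow.next(body);  // resume with the request body
  if (done) sessions.delete(sessionId);
  return value;
}

console.log(handleRequest("s1"));               // "Please send your name"
console.log(handleRequest("s1", "Ada"));        // "Hi Ada, now send your email"
console.log(handleRequest("s1", "ada@x.io"));   // "Registered Ada <ada@x.io>"
```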
Reflection is quite widely used for O/R mappers and serialisation, but many statically typed languages support reflection as well. In duck-typed languages it can be used to find out at runtime whether a facility is implemented by looking at the object's metadata. Some O/R mappers (and similar tools) work by intercepting accesses to instance variables and redirecting the updates to a cached record in the data access layer. This helps make the persistence relatively transparent to the developer, as the field accesses look much like local variables.
Runtime object alteration is slightly useful (think monkey-patching) but mostly a gimmick. There aren't many really killer uses for it that come to mind immediately, but people certainly do use it. One possible use for it is fixing slightly broken behaviour when subclassing is not an option for some reason.
Metaprogramming is quite a fuzzy term, but arguably generics and C++ templates are an example of metaprogramming taking place in statically typed languages. In languages with metaclass support, custom metaclasses can be used to implement particular behaviours such as singletons or object registries. Another metaprogramming example is Smalltalk's #doesNotUnderstand: method, which is called on attempts to invoke nonexistent methods. The method name and parameters are supplied to the implementor of #doesNotUnderstand: and can subsequently be used to construct a method invocation reflectively. Trapping this can be used (for example) to implement generic proxy mechanisms.
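The JavaScript counterpart of that trap is the standard Proxy object; here is a small sketch (the service and method names are invented) that catches calls to members which don't exist and forwards them, which is exactly how generic proxies and remoting stubs are built:

```javascript
// Trap accesses to nonexistent members, the JS analogue of Smalltalk's
// doesNotUnderstand: or Ruby's method_missing.
function remoteStub(serviceName) {
  return new Proxy({}, {
    get(_target, methodName) {
      // Hand back a function that "forwards" the call; here we only log it.
      return (...args) => {
        console.log(`${serviceName}.${String(methodName)}(${JSON.stringify(args)})`);
        return Promise.resolve(null);   // pretend the remote call succeeded
      };
    }
  });
}

const billing = remoteStub("BillingService");   // hypothetical service
billing.charge("customer-42", 19.99);           // no such method is defined anywhere
```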
LISP programmers would argue that LISP is the most dynamic language of all due to its first class support for diddling directly with the parse trees of the code (known as 'macros'). This facility makes implementing DSLs trivial in LISP - and integrating them transparently into your code base.
All the features you enumerate are also available in statically typed languages, some with constraints.
Reflection: present in Java and C# (not type-safe).
First-class continuations: restricted support in Scala (maybe others)
Runtime object alteration: changing the type of an object is supported in a restricted form in C# with extension methods (and was expected in Java 7 at the time) and implicit type conversions in Scala. Although open classes are not supported, most of the use cases are covered by type conversions.
Metaprogramming: I would say Metaprogramming is the heading for a lot of related features like reflection, type changes at runtime, AOP etc.
So there is not a lot left to discuss that is supported only by dynamic languages. Support for reflection, for example, circumvents the type system, but it is useful in certain situations where this kind of flexibility is needed; the same is true in dynamic languages.
The open class feature supported by Ruby is something that compiled languages will never support. It is the most flexible form of metaprogramming possible (with all its implications: security, performance, maintainability). You can change classes of the platform itself. It's used by Ruby on Rails to create methods of domain objects from metadata on the fly. In a statically typed language you at least have to create (or generate the code of) the interface of your domain object.
If you're looking for the "most dynamic languages", all homoiconic languages like LISP and Prolog are good candidates. Interestingly, C# is somewhat homoiconic with the expression trees in LINQ.
You should visit Douglas Crockford's Wrrrld Wide Web and see his wizardry with JavaScript. JavaScript is usually written in a pretty straightforward and simple manner, like slightly simplified C. But that's only the surface. The immutable keywords are a small percentage of the language's power. Most of it lies in the objects and methods exported by the system, and those are fully mutable. You can replace/extend methods on the fly, you can replace pretty deeply rooted system methods, nest eval(), load a generated <SCRIPT> on the fly, and so on. This is usable for writing all kinds of language extensions, frameworks, toolboxes and such. Instead of 200 lines of your program in straightforward JavaScript, you write 50 lines that modify how JavaScript works, and another 50 that use the new syntax to get the work done. You can generate whole pages on the fly, including the JS embedded in them. You turn webpage structure into data storage. You replace frequently used methods of popular objects, and your own, to change their behavior on the fly, changing not only the looks but also the function of a webpage in one click.
It really feels like JavaScript becomes a metalanguage for modifying the JavaScript engine, making JavaScript function like a different language; then you further modify it using the already-modified version, and your actual, final app takes a dozen extremely intuitive lines that get the language to do exactly what it needs. Oh, and it patches the countless bugs and shortcomings of the JavaScript implementation in MSIE in the process.
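A tiny example of the kind of on-the-fly replacement described above: patching a built-in method at runtime while delegating to the saved original (fine for illustration; doing this to shared built-ins in production code is famously risky):

```javascript
// Replace a built-in method at runtime, keeping a reference to the original.
const originalToFixed = Number.prototype.toFixed;

Number.prototype.toFixed = function (digits) {
  console.log(`toFixed(${digits}) called on ${this}`);   // added tracing behaviour
  return originalToFixed.call(this, digits);             // then defer to the original
};

console.log((3.14159).toFixed(2));   // logs the trace, then prints "3.14"
```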
I won't claim Lisp is the "most dynamic" (I'm not even sure what that means), but Lisp programmers frequently do things that are difficult-to-impossible in other languages:
create new control structures
create new syntax for existing constructs (I think every metaclass I've ever seen has its own defwhatever form)
extend the runtime (every .emacs is a runtime extension, e.g., what would it take to write calendar-mode for another editor?)
Yegge talks about it some here w.r.t. Emacs, e.g., parse XML by converting it to s-expressions, writing functions for the tags you want to process, and actually running it.
Ultimately it's not languages that write dynamic code, it's programmers; and there's going to be a learning curve to adjust your patterns to styles you're not used to. So what types of work can make the best use of dynamic capabilities? The first that comes to my mind is middleware: interfaces among heterogeneous systems, especially those with imperfectly documented APIs or APIs that change a lot, and where data serialization is dynamic.
I'd say anywhere you see REST and JSON being applied, you're more likely to find dynamic code; for instance, JavaScript, PHP, Perl, Ruby, ... are popular at least partially because they are capable of dynamic adaptation.
Also, there's a lot of JavaScript browser code that deals with browser version and brand incompatibilities using dynamic techniques.
Yes, I feel JavaScript is a good one.
JavaScript is so flexible that people working in different languages have different variants of it to suit them. Microsoft has the Ajax library, which has typical .NET/C#-style syntax, and there are JavaScript libraries built around $ that look a bit like PHP syntax. It's all there because JavaScript is a beauty. How many other languages can you name that facilitate something like this?
And one should know about JavaScript's closure feature, which is state of the art and helps create amazing algorithms with great results.

What languages implement features from functional programming?

Lisp developed a set of interesting language features quite early on in the academic world, but most of them never caught on in production environments.
Some languages, like JavaScript, adapted basic features like garbage collection and lexical closures, but all the stuff that might actually change how you write programs on a large scale, like powerful macros, the code-as-data thing and custom control structures, only seems to propagate within other functional languages, none of which are practical to use for non-trivial projects.
The functional programming community also came up with a lot of other interesting ideas (apart from functional programming itself), like referential transparency, generalised case-expressions (i.e., pattern matching, not crippled like C/C# switches) and curried functions, which seem obviously useful in regular programming and should be easy to integrate with existing programming practice, but for some reason seem to be stuck in the academic world forever.
Why do these features have such a hard time getting adopted? Are there any modern, practical languages that actually learn from Lisp instead of half-assedly copying "first class functions", or is there an inherent conflict that makes this impossible?
Are there any modern, practical languages that actually learn from Lisp instead of half-assedly copying "first class functions", or is there an inherent conflict that makes this impossible?
Why aren't Lisp, Haskell, OCaml, or F# modern?
You might just need to take it upon yourself to look at them and realize that they are more robust, with libraries comparable to Java's, than you'd think.
A lot of features have been adopted from functional languages into other languages. But also vice versa: (some) functional languages have objects, for example.
I suggest you try Clojure. Syntactically beautiful dialect, functional (in the ML sense), and fast. You get immutability, software transactional memory, multiversion concurrency control, a REPL, SLIME support, and an inexhaustible FFI. It's the Lisp (& Haskell) for the Business Programmer. I'm having a great time using it daily in my real job.
There is no known correlation between a language "catching on" and whether or not it has powerful, well-researched, well-designed features.
A lot has been said on the subject. It exists all over the place, in technology and also in the arts. We know artist A has more training and produces works of greater breadth and depth than artist B, yet artist B is far more successful in the marketplace. Is it because there's a zeitgeist? Is it because artist B has better marketing? Is it because most people won't take the time to understand artist A? Maybe artist B is secretly awful and we should mistrust experts who make judgements about artists? Probably all of the above, to some degree or another.
This drives people who study the arts, and people who study programming languages, crazy.
Scala is a cool functional/OO language with pattern matching, first class functions, and the like. It has the advantage of compiling to Java bytecode and inter-operates well with Java code.
Common Lisp is used in the real world, albeit not widely so, I guess.
Python or Ruby. See Paul Graham's thoughts on this in the question "I like Lisp but my company won't let me use it. What should I do?".
Scala is the absolute king of languages which have adopted significant academic features. Higher kinds, self types, polymorphic pattern matching, etc. All of these are bleeding-edge (or near to it) academic research topics that have been incorporated into Scala as fundamental features. Arguably, this has been to the detriment of the language's simplicity, but it does lead to some very interesting patterns.
C# is more mainstream than Scala, but it has also adopted fewer of these "out-there" functional features. LINQ is a limited implementation of Wadler's generalized list comprehensions, and everyone knows about lambdas. But for all that, C# (rightfully) remains a bit conservative in adopting research features from the academic world.
Erlang has recently gained renewed exposure not only through being used by Twitter, but also by the rise of XMPP-driven messaging and implementations such as ejabberd. It sports many of the ideas coming from functional programming, being a language designed with that in mind. Initially conceived by Ericsson to run telephone switches and the first GSM networks, it is still around, it is fully functional (as a language) and used in many production environments.
Lua.
It's used as a scripting/extension language for a number of games (like World of Warcraft) and applications (Snort, NMAP, Wireshark, etc.). In fact, according to an Adobe developer, Adobe's Lightroom is over 40% Lua.
The guys behind Lua have repeatedly listed Scheme and Lisp as major influences on Lua, and Lua has even been described as Scheme without the parentheses.
Have you checked out F#?
Lots of dynamic programming languages implement ideas from functional programming. The newer .NET languages (C# and VB) have what they call lambdas, but these aren't side-effect free.
It's not difficult to combine concepts from functional programming and object-oriented programming, for example, but it doesn't always make a lot of sense. Object-oriented languages (try to) encapsulate state inside objects, while functional languages encapsulate state inside functions.
There have been a lot of languages that have combined these paradigms by just throwing them together (F#), and this can be useful, but I think we still need a couple of decades of playing with languages like this until we can create a new paradigm that will successfully combine the ideas from OO and functional programming.
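As a small illustration of the state-encapsulation contrast mentioned above (a sketch, not tied to any particular language feature beyond classes and closures):

```javascript
// State held in an object, the OO style...
class Counter {
  constructor() { this.n = 0; }
  next() { return ++this.n; }
}

// ...and the same state captured in a closure, the functional-language style.
function makeCounter() {
  let n = 0;
  return () => ++n;
}

const a = new Counter();
const b = makeCounter();
console.log(a.next(), a.next());   // 1 2
console.log(b(), b());             // 1 2
```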
C# 3.0 definitely does.
C# now has
Lambda Expressions
Higher Order Functions
Map/Reduce + Filter (folding?) over lists and all types which implement IEnumerable.
LINQ
Object + Collection Initializers.
The last two list items may not fall under proper functional programming; anyway, the answer is that C# has implemented many useful concepts from Lisp, etc.
In addition to what was said, a lot of LISP goodness is based on a guaranteed lack of side effects and the use of built-in data structures. Both rarely hold in the real world. ML is probably a better functional base.
Lisp developed a set of interesting language features quite early on in the academic world, but most of them never caught on in production environments.
Because the kind of people who manage software developers aren't the kind of people with whom you can have an interesting chat comparing different language features. Around 2000, I wanted to use LISP to implement XML-to-HTML transforms on our corporate website (this was around the time of Amazon implementing their backend in LISP). I didn't get to. This is mildly ironic seeing as the company I was working for made and sold a Common LISP environment.
Another "real-world" language that implements functional programming features is Javascript. Since absolutely everything has a value, then high-order functions are easily implemented. You also have other tenants of functional programming such as lambda functions, closures, and currying.
The features you refer to ("powerful" macros, the code-as-data thing and custom control structures) have not propagated within other functional languages. They died after Lisp taught us that they are a bad idea.
Modern functional languages (OCaml, Haskell, Erlang, Scala, F#, C# 3.0, JavaScript) do not have those features.
Cheers,
Jon Harrop.
