There are some frameworks out there for dynamic bytecode generation, manipulation and weaving (BCEL, CGLIB, javassist, ASM, MPS). I want to learn about them, but since I don't have much time to know all the details about all of them, I would like to see a sort of comparison chart saying the advantages and disadvantages of one versus the others and an explanation of why.
Here on SO, I found a lot of questions asking something similar, and the answers normally said "you can use cglib or ASM", or "javassist is better than cglib", or "BCEL is old and is dying", or "ASM is the best because it gives X and Y". These answers are useful, but they do not fully answer the question in the scope that I want: comparing the frameworks more deeply and giving the advantages and disadvantages of each one.
Analysis of bytecode libraries
As far as I can tell from the answers you got here and in the questions you have looked at, these answers do not formally address the question in the explicit manner you stated. You asked for a comparison, whereas these answers vaguely state what one might want based on what your target is (e.g. "Do you need to know bytecode? [y/n]"), or are too narrow.
This answer is a short analysis of each bytecode framework, and provides a quick comparison at the end.
Javassist
Tiny (javassist.jar (3.21.0) is ~707KB / javassist-rel_3_22_0_cr1.zip is ~1.5MB)
High(/Low)-level
Straightforward
Feature-complete
Requires minimal to no class file format knowledge
Requires moderate Java instruction set knowledge
Minimal learning effort
Has some quirks in the single-line/multi-line compile-and-insert-bytecode methods
I personally prefer Javassist simply because of how quickly you can start using it to build and manipulate classes. The tutorial is straightforward and easy to follow. The jar file is a tiny 707KB, so it is nice and portable, which makes it suitable for standalone applications.
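To give a feel for it, here is a minimal sketch using the standard Javassist API (my.package.MyNumberPrinter is a made-up class name) that compiles a Java source fragment into a new method:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class JavassistDemo {
    public static void main(String[] args) throws Exception {
        // the default ClassPool mirrors the JVM's class search path
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.get("my.package.MyNumberPrinter"); // made-up class
        // compile a plain Java source fragment straight into a new method
        cc.addMethod(CtNewMethod.make(
                "public void printNumber(int number) {"
              + "    System.out.println(\"the number is: \" + number);"
              + "}", cc));
        // write the modified .class file under ./out
        cc.writeFile("out");
    }
}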
ASM
Large (asm-6.0_ALPHA-bin.zip is ~2.9MB / asm-svn-latest.tar.gz (10/15/2016) is ~41MB)
Low(/High)-level
Comprehensive
Feature-complete
Recommends a proficient knowledge of the class file format
Requires proficiency with Java instruction set
Moderate learning effort (somewhat complex)
ASM by ObjectWeb is a very comprehensive library which lacks nothing related to building, generating, and loading classes. In fact, it even has class analysis tools with predefined analyzers. It is said to be the industry standard for bytecode manipulation. That is also the reason why I steer clear of it.
When I see examples of ASM, it seems like a cumbersome beast of a task, given the number of lines it takes to modify or load a class. Even some of the parameters to some methods seem a bit cryptic and out of place for Java. With things like ACC_PUBLIC and plenty of method calls with null everywhere, it honestly does look better suited to a low-level language like C. Why not simply pass a String literal like "public", or an enum Modifier.PUBLIC? That would be friendlier and easier to use. That is my opinion, however.
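To illustrate, here is roughly what generating a trivial class looks like with ASM's visitor API (a sketch; my/pkg/Hello and greet are made-up names). Note the flags and the null arguments:

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import static org.objectweb.asm.Opcodes.*;

public class AsmHelloGenerator {
    public static byte[] generate() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        // version, access flags, name, signature, superclass, interfaces
        cw.visit(V1_8, ACC_PUBLIC, "my/pkg/Hello", null, "java/lang/Object", null);
        MethodVisitor mv = cw.visitMethod(ACC_PUBLIC | ACC_STATIC, "greet", "()V", null, null);
        mv.visitCode();
        // System.out.println("Hello from ASM"), one instruction at a time
        mv.visitFieldInsn(GETSTATIC, "java/lang/System", "out", "Ljava/io/PrintStream;");
        mv.visitLdcInsn("Hello from ASM");
        mv.visitMethodInsn(INVOKEVIRTUAL, "java/io/PrintStream", "println",
                "(Ljava/lang/String;)V", false);
        mv.visitInsn(RETURN);
        mv.visitMaxs(0, 0); // values are recomputed because of COMPUTE_FRAMES
        mv.visitEnd();
        cw.visitEnd();
        return cw.toByteArray();
    }
}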
For reference, here is an ASM (4.0) tutorial: https://www.javacodegeeks.com/2012/02/manipulating-java-class-files-with-asm.html
BCEL
Small (bcel-6.0-bin.zip is 7.3MB / bcel-6.0-src.zip is 1.4MB)
Low-level
Adequate
Gets the job done
Requires proficiency with Java instruction set
Easy to learn
From what I have seen, this library is your basic class library that lets you do everything you need to—if you can spare a few months or years.
Here is a BCEL tutorial that really spells it out: http://www.geekyarticles.com/2011/08/manipulating-java-class-files-with-bcel.html?m=1
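For a taste of the API, here is a minimal sketch against BCEL 6 (my.pkg.Hello and greet are made-up names) that generates a class with a single println method:

import org.apache.bcel.Const;
import org.apache.bcel.generic.ClassGen;
import org.apache.bcel.generic.ConstantPoolGen;
import org.apache.bcel.generic.InstructionFactory;
import org.apache.bcel.generic.InstructionList;
import org.apache.bcel.generic.MethodGen;
import org.apache.bcel.generic.Type;

public class BcelDemo {
    public static void main(String[] args) throws Exception {
        // a new public class my.pkg.Hello extending java.lang.Object
        ClassGen cg = new ClassGen("my.pkg.Hello", "java.lang.Object", "<generated>",
                Const.ACC_PUBLIC | Const.ACC_SUPER, null);
        ConstantPoolGen cp = cg.getConstantPool();
        InstructionList il = new InstructionList();
        InstructionFactory factory = new InstructionFactory(cg);
        // build the body of greet() instruction by instruction
        il.append(factory.createPrintln("Hello from BCEL"));
        il.append(InstructionFactory.createReturn(Type.VOID));
        MethodGen mg = new MethodGen(Const.ACC_PUBLIC | Const.ACC_STATIC, Type.VOID,
                Type.NO_ARGS, new String[0], "greet", "my.pkg.Hello", il, cp);
        mg.setMaxStack();
        mg.setMaxLocals();
        cg.addMethod(mg.getMethod());
        // write the generated class file to disk
        cg.getJavaClass().dump("Hello.class");
    }
}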
cglib
Very tiny (cglib-3.2.5.jar is 295KB/source code)
Depends on ASM
High-level
Feature-complete (Bytecode Generation)
Little or no Java bytecode knowledge needed
Easy to learn
Esoteric Library
Despite the fact that you can read information from classes and transform classes, the library seems tailored to proxies. The tutorial is all about beans for the proxies, and it even mentions that it is used by "data access frameworks to generate dynamic proxy objects and intercept field access." Still, I see no reason why you can't use it for the simpler purpose of bytecode manipulation instead of proxies.
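For what it's worth, the proxy focus shows in how little code a class proxy takes. Here is a minimal sketch using Enhancer (Greeter is a made-up target class):

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public class CglibProxyDemo {
    // a plain class to proxy; made up for this example
    public static class Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static void main(String[] args) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(Greeter.class);
        // intercept every method call on the generated subclass
        enhancer.setCallback((MethodInterceptor) (obj, method, methodArgs, proxy) -> {
            System.out.println("intercepted: " + method.getName());
            return proxy.invokeSuper(obj, methodArgs);
        });
        Greeter proxied = (Greeter) enhancer.create();
        System.out.println(proxied.greet("world")); // intercepted, then "Hello, world"
    }
}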
ByteBuddy
Small bin/"Huge" src (by comparison) (byte-buddy-dep-1.8.12.jar is ~2.72 MB / 1.8.12 (zip) is 124.537 MB (exact))
Depends on ASM
High-level
Feature-complete
Personally, a peculiar name for a Service Pattern class (ByteBuddy.class)
Little or no Java bytecode knowledge needed
Easy to learn
Long story short, where BCEL is lacking, ByteBuddy is abundant. It uses a primary class called ByteBuddy using the Service Design Pattern. You create a new instance of ByteBuddy, and this represents a class that you want to modify. When you are done with your modifications, you can then make a DynamicType with make().
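Here is the hello-world sketch from their tutorial, which shows the service-style chaining that ends in make():

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.dynamic.loading.ClassLoadingStrategy;
import net.bytebuddy.implementation.FixedValue;
import static net.bytebuddy.matcher.ElementMatchers.named;

public class ByteBuddyDemo {
    public static void main(String[] args) throws Exception {
        // subclass Object and intercept toString() with a fixed value
        Class<?> dynamicType = new ByteBuddy()
                .subclass(Object.class)
                .method(named("toString"))
                .intercept(FixedValue.value("Hello World!"))
                .make() // produces a DynamicType
                .load(ByteBuddyDemo.class.getClassLoader(), ClassLoadingStrategy.Default.WRAPPER)
                .getLoaded();
        System.out.println(dynamicType.newInstance()); // prints Hello World!
    }
}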
On their website is a full tutorial with API documentation. The purpose seems to be for rather high-level modifications. When it comes to methods, there does not appear to be anything in the official tutorial, or any third-party tutorial, about creating a method from scratch, apart from delegating a method (EDITME if you know where this is explained).
Their full tutorial, along with some examples, can be found on their website.
Java Class Assistant (jCLA)
I am building my own bytecode library, which will be called Java Class Assistant, or jCLA for short, because of another project I am working on and because of the aforementioned quirks in Javassist. The project is currently in alpha, but it is available to browse on GitHub and give feedback on, and it is already workable enough to be a basic class library (I am currently working on the compilers; please help me if you can, and it will be released a lot sooner!).
It will be quite bare bones with the ability to read and write class files to and from a JAR file, as well as the ability to compile and decompile bytecode to and from source code and class files.
The overall usage pattern makes jCLA rather easy to work with, though it may take some getting used to; its style of methods and method parameters for class modifications is quite similar to ByteBuddy's:
import jcla.ClassPool;
import jcla.ClassBuilder;
import jcla.ClassDefinition;
import jcla.MethodBuilder;
import jcla.FieldBuilder;
import jcla.Modifier;
import jcla.jar.JavaArchive;
import jcla.classfile.ClassFile;
import jcla.io.ClassFileOutputStream;

import java.io.IOException;

public class JCLADemo {

    public static void main(String... args) {
        // get the class pool for this JVM instance
        ClassPool classes = ClassPool.getLocal();
        // get a class that is loaded in the JVM
        ClassDefinition classDefinition = classes.get("my.package.MyNumberPrinter");
        // create a class builder to modify the class
        ClassBuilder clMyNumberPrinter = new ClassBuilder(classDefinition);
        // create a new method with name printNumber
        MethodBuilder printNumber = new MethodBuilder("printNumber");
        // add access modifiers (use modifiers() for convenience)
        printNumber.modifier(Modifier.PUBLIC);
        // set return type (void)
        printNumber.returns("void");
        // add a parameter (use parameters() for convenience)
        printNumber.parameter("int", "number");
        // set the body of the method (compiled to bytecode)
        // use body(byte[]) or insert(byte[]) for bytecode
        // insert(String) also compiles to bytecode
        printNumber.body("System.out.println(\"the number is: \" + number);");
        // add the method to the class
        // you can use method(MethodDefinition) or method(MethodBuilder)
        clMyNumberPrinter.method(printNumber.build());
        // add a field to the class
        FieldBuilder HELLO = new FieldBuilder("HELLO");
        // set the modifiers for HELLO; convenience method example
        HELLO.modifiers(Modifier.PRIVATE, Modifier.STATIC, Modifier.FINAL);
        // set the type of this field
        HELLO.type("java.lang.String");
        // set the actual value of this field
        // this overloaded method expects a VariableInitializer production
        HELLO.value("\"Hello from \" + getClass().getSimpleName() + \"!\"");
        // add the field to the class (same overloads as clMyNumberPrinter.method())
        clMyNumberPrinter.field(HELLO.build());
        // redefine
        classDefinition = clMyNumberPrinter.build();
        // update the class definition in the JVM's ClassPool
        // (this updates the actual JVM's loaded class)
        classes.update(classDefinition);
        // write to disk
        JavaArchive archive = new JavaArchive("myjar.jar");
        ClassFile classFile = new ClassFile(classDefinition);
        ClassFileOutputStream stream = new ClassFileOutputStream(archive);
        try {
            stream.write(classFile);
        } catch (IOException e) {
            // report the failure
            e.printStackTrace();
        } finally {
            stream.close();
        }
    }
}
(VariableInitializer production specification for your convenience.)
As may be implied from the above snippet, each ClassDefinition is immutable. This makes jCLA more secure, thread-safe, network-safe, and easy to use. The system revolves primarily around ClassDefinitions as the object of choice for querying information about a class in a high-level manner, and the system is built in such a way that ClassDefinition is converted to and from target types such as ClassBuilder and ClassFile.
jCLA uses a tiered system for class data. At the bottom, you have the immutable ClassFile: a struct-like software representation of a class file. Then you have immutable ClassDefinitions, which are converted from ClassFiles into something less cryptic, more manageable, and more useful to the programmer who is modifying or reading data from the class; a ClassDefinition is comparable to the information accessed through java.lang.Class. Finally, you have mutable ClassBuilders. The ClassBuilder is how classes are modified or created. It lets you create a ClassDefinition directly from the builder's current state. Creating a new builder for each class is not necessary, as the reset() method will clear the variables.
(Analysis of this library will be available as soon as it is ready for release.)
But until then, as of today:
Small (src: 227.704 KB exact, 6/2/2018)
Self-sufficient (no dependencies except Java's shipped library)
High-level
No required knowledge of Java bytecode or class files (for the tier 1 API, e.g. ClassBuilder, ClassDefinition, etc.)
Easy to learn (even easier if coming from ByteBuddy)
I still recommend learning about Java bytecode, however; it will make debugging easier.
Comparison
Considering all of these analyses (excluding jCLA for now), the broadest framework is ASM, the easiest to use is Javassist, the most basic implementation is BCEL, and the most high-level for bytecode generation and proxies is cglib.
ByteBuddy deserves its own explanation. It is easy to use like Javassist, but it appears to lack some of the features that make Javassist great, such as method creation from scratch, so you would apparently need to use ASM for that. If you need to do some lightweight modification of classes, ByteBuddy is the way to go, but for more advanced modification of classes while maintaining a high level of abstraction, Javassist is a better choice.
Note: if I missed a library, please edit this answer or mention it in a comment.
If your interest in bytecode generation is only to use it, the comparison chart becomes rather simple:
Do you need to understand bytecode?
for javassist : no
for all others : yes
Of course, even with javassist you may be confronted with bytecode concepts at some point. Likewise, some of the other libraries (such as ASM) have a more high-level API and/or tool support to shield you from many of the bytecode details.
What really distinguishes javassist, though, is the inclusion of a basic Java compiler. This makes it very easy to write complex class transformations: you only have to put a Java fragment in a String and use the library to insert it at specific points in the program. The included compiler will build the equivalent bytecode, which is then inserted into the existing classes.
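For example, here is a minimal sketch (my.pkg.Worker and its process method are made-up names) that splices a source fragment into an existing method:

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

public class InsertDemo {
    public static void main(String[] args) throws Exception {
        CtClass cc = ClassPool.getDefault().get("my.pkg.Worker");
        CtMethod m = cc.getDeclaredMethod("process");
        // the embedded compiler turns this fragment into bytecode
        // and splices it in at the start of the method
        m.insertBefore("{ System.out.println(\"entering process\"); }");
        cc.writeFile();
    }
}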
First of all, it all depends on your task. Do you want to generate new code, or analyze existing bytecode, and how complex an analysis do you need? Also, how much time do you want to invest in learning Java bytecode? You can divide bytecode frameworks into those that provide a high-level API, which lets you get away without learning low-level opcodes and JVM internals (e.g. Javassist and CGLIB), and low-level frameworks for when you need to understand the JVM or use bytecode generation tools (ASM and BCEL). For analysis, BCEL has historically evolved a bit further, but ASM provides decent functionality that is easy to extend. Also note that ASM is probably the only framework that provides the most advanced support for the STACK_MAP information required by the new bytecode verifier enabled by default in Java 7.
What qualifies a programming language to be called a dynamic language? What sort of problems should I use a dynamic programming language to solve? What is the main difference between static programming languages and dynamic programming languages?
I don't think there is black and white here - there is a whole spectrum between dynamic and static.
Let's take two extreme examples for each side of the spectrum, and see where that takes us.
Haskell is an extreme in the static direction.
It has a powerful type system that is checked at compile time: if your program compiles, it is free from common and not-so-common errors.
The compiled form is very different from the Haskell program (it is a binary). Consequently, runtime reflection and modification are hard unless you have foreseen them. In comparison to interpreting the original, the result is potentially more efficient, as the compiler is free to do funky optimizations.
So for static languages I usually think: fairly lengthy compile-time analysis needed, type system will prevent me from making silly mistakes but also from doing some things that are actually valid, and if I want to do any manipulation of a program at runtime, it's going to be somewhat of a pain because the runtime representation of a program (i.e. its compiled form) is different from the actual language itself. Also it could be a pain to modify things later on if I have not foreseen it.
Clojure is an extreme in the dynamic direction.
It too has a type system, but at compile time there is no type checking. Many common errors can only be discovered by running the program.
Clojure programs are essentially just Clojure lists (the data structure) and can be manipulated as such. So when doing runtime reflection, you are actually processing a Clojure program more or less as you would type it - the runtime form is very close to the programming language itself. So you can basically do the same things at runtime as you could at "type time". Consequently, runtime performance may suffer because the compiler can't do many up-front optimizations.
For dynamic languages I usually think: short compilation step (basically just reading syntax), so fast and incremental development, practically no limits to what it will allow me to do, but won't prevent me from silly mistakes.
As other posts have indicated, other languages try to take more of a middle ground - e.g. static languages like F# and C# offer reflection capabilities through a separate API, and of course can offer incremental development by using clever tools like F#'s REPL. Dynamic languages sometimes offer optional typing (like Racket, Strongtalk), and generally, it seems, have more advanced testing frameworks to offset the lack of any sanity checking at compile time. Also type hints, while not checked at compile time, are useful hints to generate more efficient code (e.g. Clojure).
If you are looking to find the right tool for a given problem, then this is certainly one of the dimensions you can look at, but by itself it is not likely to force a decision either way. Have a think about the other properties of the languages you are considering: is it a functional or OO or logic or ... language? Does it have a good framework for the things I need? Do I need stability and binary backwards compatibility, or can I live with some churn in the compiler? Do I need extensive tooling? Etc.
Dynamic language does many tasks at runtime where a static language would do them at compile-time.
The tasks in question are usually one or more of: type system, method dispatch and code generation.
Which also pretty much answers the questions about their usage.
There are a lot of different definitions in use, but one possible difference is:
A dynamic language typically uses dynamic typing.
A static language typically uses static typing.
Some languages are difficult to classify as either static or dynamically typed. For example, C# is traditionally regarded as a statically typed language, but C# 4.0 introduced a static type called dynamic which behaves in some ways more like a dynamic type than a static type.
What qualifies a programming language to be called a dynamic language?
Dynamic languages are generally considered to be those that offer flexibility at run-time. Note that this does not necessarily conflict with static type systems. For example, F# was recently voted "favorite dynamic language on .NET" at a conference even though it is statically typed. Many people consider F# to be a dynamic language because it offers run-time features like meta-circular evaluation, a Read-Evaluate-Print-Loop (REPL) and dynamic typing (of sorts). Also, type inference means that F# code is not littered with type declarations like most statically typed languages (e.g. C, C++, Java, C# 2, Scala).
What are the problems that I should go for a dynamic language to solve?
In general, provided time and space are not of critical importance you probably always want to use languages with run-time flexibility and capabilities like run-time compilation.
This thread covers the issue pretty well:
Static/Dynamic vs Strong/Weak
The question is asked during Dynamic Languages Wizards Series - Panel on Language Design (at 24m 04s).
Answer from Jonathan Rees:
You know one when you see one
Answer from Guy Steele:
A dynamic language is one that defers as many decisions as possible to run time.
For example about array size, the number of data objects to allocate, decisions like that.
The concept is deferring until runtime, that's what I understand to be dynamic.
We all know that metaprogramming is the concept of code == data (or programs that write programs).
But are there any applications that use it, and what are the advantages of using it?
This question may be closed, but I didn't see any related questions.
IDEs are full of metaprogramming:
code completion
code generation
automated refactoring
Metaprogramming is often used to work around the limitations of Java:
code generation to work around the verbosity (e.g. getter/setter)
code generation to work around the complexity (e.g. generating Swing code from a WYSIWIG editor)
compile time/load time/runtime bytecode rewriting to work around missing features (AOP, Kilim)
generating code based on annotations (Hibernate); see the sketch after this list
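As a minimal sketch of the annotation-driven flavor (Column and User are made-up stand-ins for the kind of metadata a tool like Hibernate reads):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// a made-up Hibernate-style mapping annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column { String name(); }

class User {
    @Column(name = "user_name")
    String name = "alice";
}

public class AnnotationDemo {
    public static void main(String[] args) throws Exception {
        User u = new User();
        // derive an SQL-ish fragment from the annotated fields at runtime
        for (Field f : User.class.getDeclaredFields()) {
            Column c = f.getAnnotation(Column.class);
            if (c != null) {
                System.out.println(c.name() + " = '" + f.get(u) + "'");
            }
        }
    }
}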
Frameworks are another example:
generating Models, Views, Controllers, Helpers, Testsuites in Ruby on Rails
generating Generators in Ruby on Rails (metacircular metaprogramming FTW!)
In Ruby, you pretty much cannot do anything without metaprogramming. Even simply defining a method is actually running code that generates code.
Even if you just have a simple shell script that sets up your basic project structure, that is metaprogramming.
Since code as data is one of the key concepts of Lisp, the best thing would be to see real applications written in Lisp dialects.
On this link you can see an article about a real world application written partly in Clojure, a dialect of Lisp.
The thing is not to write programs that write programs just because you can, but to add new functionality to your language when you really need it. Just think if you could simply add a new keyword to Java or C#...
If you implement metaprogramming in a language-independent way, you get a program analysis and transformation system. This is precisely a tool that treats (arbitrary) programs as data. These can be used to carry out arbitrary transformations on arbitrary programs.
It also means you aren't limited by the specific metaprogramming features that the compiler guys happened to put into your language. For instance, while C++ has templates, it has no "reflection". But a program transformation system can provide reflection even if the base language doesn't have it. In particular, having a program transformation engine means never having to say "I'm sorry, your language doesn't support metaprogramming (well enough) so I can't do much except write code manually".
See our DMS Software Reengineering Toolkit for such a program transformation system. It has been used to build test coverage and profiling tools, code generation tools, tools to reshape the architecture of large-scale C++ applications, tools to migrate applications from one language to another, ... This is all extremely practical. Most of the tasks done with DMS would be completely impractical to do by hand.
Not a real world application, but a talk about metaprogramming in ruby:
http://video.google.com/videoplay?docid=1541014406319673545
Google TechTalks, August 3, 2006: Jack Herrington, the author of Code Generation in Action (Manning, July 2003), will talk about code generation techniques using Ruby. He will cover both do-it-yourself and off-the-shelf solutions in a conversation about where Ruby is as a tool, and where it's going.
A real world example would be Django's model metaclass. It is the class of the class, from which models inherit from and responsible for the outfit of the model instances with all their attributes and methods.
Any ORM in a dynamic language is an instant example of practical metaprogramming. E.g. see how SQLAlchemy or Django's ORM creates classes for tables it discovers in the database, dynamically, in runtime.
ORMs and other tools in the Java world that use annotations to modify class behavior do a bit of metaprogramming, too.
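A minimal sketch of the mechanism many of these Java tools build on, java.lang.reflect.Proxy, which conjures an implementation of an interface at runtime (UserDao is a made-up example):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface UserDao {
    String findNameById(int id);
}

public class ProxyDemo {
    public static void main(String[] args) {
        // a real framework would derive a query from the method name;
        // here the handler just echoes what was asked for
        InvocationHandler handler = (proxy, method, methodArgs) ->
                method.getName() + "(" + methodArgs[0] + ")";
        UserDao dao = (UserDao) Proxy.newProxyInstance(
                UserDao.class.getClassLoader(),
                new Class<?>[] { UserDao.class },
                handler);
        System.out.println(dao.findNameById(42)); // prints findNameById(42)
    }
}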
Metaprogramming in C++ allows you to write code that will get transformed at compilation.
There are a few great examples I know about (google for them):
Blitz++, a library to write efficient code for manipulating arrays
Intel Array Building Blocks
CGAL
Boost::spirit, Boost::graph
Many compilers and interpreters are implemented with metaprogramming techniques internally - as a chain of code rewriting passes.
ORMs, project templates, GUI code generation in IDEs had been mentioned already.
Domain Specific Languages are widely used, and the best way to implement them is to use metaprogramming.
Things like Autoconf are obviously cases of metaprogramming.
Actually, it's unlikely one can find an area of software development which won't benefit from one or another form of metaprogramming.
I'm sketching a design of something (machine learning of functions) that will preferably want a functional programming language, and also introspection, specifically the ability to examine the program's own code in some nicely tractable format, and preferably also the ability to get machine generated code compiled at runtime, and I'm wondering what's the best language to write it in. Lisp of course has strong introspection capabilities, but the statically typed languages also have advantages; the ones I'm considering are:
F# - the .Net platform has a good story here, you can read byte code at run time and also emit byte code and get it compiled; I assume there's no problem accessing these facilities from F#.
Haskell, Ocaml - do these have similar facilities, either via byte code or parse tree?
Are there other languages I should also be looking at?
Haskell's introspection mechanism is Template Haskell, which supports compile-time metaprogramming and, when combined with e.g. LLVM, provides runtime metaprogramming facilities.
OCaml has:
Camlp4 to manipulate OCaml concrete syntax trees in OCaml. The maintained implementation of Camlp4 is Camlp5.
MetaOCaml for full-scale multi-stage programming.
Ocamljit to generate native code at run time, but I don't think it has been maintained recently.
OCaml-Java to compile OCaml code for the Java virtual machine. I don't know if there are nice reflection capabilities.
Not really an answer, but note also the F# Quotations feature and library, for more homoiconicity stuff.
You might check out the typed variant of Racket (previously known as PLT Scheme). It retains most of the syntactic simplicity of Scheme, but provides a static type system. Since Racket is a Scheme, metaprogramming is par for the course, and the runtime can emit native code by way of a JIT.
The Haskell approach would be more along the lines of parsing the source. The Haskell Platform includes a complete source parser, or you can use the GHC API to get access that way.
I'd also look at Scala or Clojure, which bring with them all the libraries that have been developed for Java. You'll never need to worry whether a library exists. But more to the point of your question, these languages give you the same reflection (or more powerful types) that you will find in Java.
I'm sketching a design of something (machine learning of functions) that will preferably want a functional programming language, and also introspection, specifically the ability to examine the program's own code in some nicely tractable format, and preferably also the ability to get machine generated code compiled at runtime, and I'm wondering what's the best language to write it in. Lisp of course has strong introspection capabilities, but the statically typed languages also have advantages; the ones I'm considering are:
Can you not just parse the source code like an ordinary interpreter or compiler? Why do you need introspection?
F# - the .Net platform has a good story here, you can read byte code at run time and also emit byte code and get it compiled; I assume there's no problem accessing these facilities from F#.
F# has a rudimentary quotation mechanism but you can only quote some expressions and not other kinds of code, most notably type definitions. Also, its evaluation mechanism is orders of magnitude slower than genuine compilation so it is basically completely useless. You can use reflection to analyze type definitions but, again, it is quite rudimentary.
You can read byte code but that has been compiled so a lot of information and structure has been lost.
F# also has lexing and parsing technology (most notably fslex, fsyacc and FParsec) but it is not as mature as OCaml's.
Haskell, Ocaml - do these have similar facilities, either via byte code or parse tree?
Haskell has Template Haskell but I've never heard of anyone using it (abandonware?).
OCaml has its Camlp4 macro system and a few people do use it but it is poorly documented.
As for lexing and parsing, Haskell has a few libraries (most notably Parsec) and OCaml has many libraries.
Are there other languages I should also be looking at?
Term rewrite languages like Mathematica would be an obvious choice because they make it trivial to manipulate code. The Pure language might be of interest.
You might also consider MetaOCaml for its run-time compilation capabilities.
It seems I've got to agree with this post when it states that
[...] code in dynamically typed languages follows static-typing conventions
Much dynamic language code I encounter does indeed seem to be quite static (thinking of PHP) whereas dynamic approaches look somewhat clumsy or unnecessary instead.
Most of the time, it's just about omitting type signatures, which, in the context of type-inference/structural typing, doesn't even have to imply dynamic typing at all.
So my question (and it's not meant to be too subjective) is: in which dynamic languages or fields of application are all these more advanced dynamic language features (that couldn't be replicated in static/compiled languages that easily) actually and idiomatically used?
Examples:
Reflection
First-class continuations
Runtime object alteration/generation
Metaprogramming
Run-time code evaluation
Non-existent member behaviour
What are useful applications for such techniques?
Some examples of widespread application of the above techniques are:
Continuations make their appearance in web frameworks like Rails or Seaside. They can be used to allow an API to fake a local context. In Seaside or Rails this makes the API behave much more like a local GUI form handler than an HTTP request handler, which serves to simplify the task of coding the application's user interface elements. However, although many dynamic languages have strong support for continuations they are certainly not unique to this type of language.
Reflection is quite widely used for O/R mappers and serialisation, but many statically typed langages support reflection as well. On duck typed languages it can be used to find out at runtime if a facility is implemented by looking at the object's metadata. Some O/R mappers (and similar tools) work by implementing accesses to instance variables and redirecting the updates to a cached record in the data access layer. This helps to make the persistence relatively transparent to the developer as the field accesses look much like local variables.
Runtime object alteration is slightly useful (think monkey-patching) but mostly a gimmick. There aren't many really killer uses for it that come to mind immediately, but people certainly do use it. One possible use for it is fixing slightly broken behaviour when subclassing is not an option for some reason.
Metaprogramming is quite a fuzzy term, but arguably generics and C++ templates are an example of metaprogramming taking place in statically typed languages. In languages with metaclass support, custom metaclasses can be used to implement particular behaviours such as singletons or object registries. Another metaprogramming example is Smalltalk's #doesNotUnderstand: method, which is called on attempts to invoke nonexistent methods. The method name and parameters are supplied to the implementor of #doesNotUnderstand: and can subsequently be used to construct a method invocation reflectively. Trapping this can be used (for example) to implement generic proxy mechanisms.
LISP programmers would argue that LISP is the most dynamic language of all due to its first class support for diddling directly with the parse trees of the code (known as 'macros'). This facility makes implementing DSLs trivial in LISP - and integrating them transparently into your code base.
All the features you enumerate are also available in statically typed languages, some with constraints.
Reflection: Present in Java, C# (not type safe).
First-class continuations: restricted support in Scala (maybe others)
Runtime object alteration: Changing the type of an object is supported in a restricted form in C# with extension methods (will be in Java 7) and implicit type conversions in Scala. Although open classes are not supported, most of the use cases are covered by type conversions.
Metaprogramming: I would say Metaprogramming is the heading for a lot of related features like reflection, type changes at runtime, AOP etc.
So there is not a lot left to discuss that is supported only by dynamic languages. Support for Reflection, for example, circumvents the type system, but it is useful in certain situations where this kind of flexibility is needed. The same is true in dynamic languages.
The open class feature supported by Ruby is something that compiled languages will never support. It is the most flexible form of Metaprogramming possible (with all the implications: security, performance, maintainability.) You can change classes of the platform. It's used by Ruby on Rails to create methods of domain objects from metadata on the fly. In a statically typed language you have at least to create (or generate the code of) the interface of your domain object.
If you're looking for the "most dynamic languages", all homoiconic languages like LISP and Prolog are good candidates. Interestingly, C# is somewhat homoiconic with the expression trees in LINQ.
You should visit Douglas Crockford's Wrrrld Wide Web and see his wizardry over Javascript. Javascript is usually written in a pretty straightforward and simple manner, like slightly simplified C. But that is only the surface. The immutable keywords are a small percentage of the language's power. Most of it lies in objects and methods exported by the system, and these are fully mutable. You can replace or extend methods on the fly, you can replace pretty deeply rooted system methods, nest eval(), load generated <SCRIPT> on the fly, and so on. This is usable in writing all kinds of language extensions, frameworks, toolboxes and such. Instead of 200 lines of your program in straightforward Javascript, you write 50 lines that modify how Javascript works, and another 50 that use the new syntax to get the work done. You can generate whole pages on the fly, including the JS embedded in them. You turn the webpage structure into data storage. You replace frequently used methods of popular objects, and your own, to change their behavior on the fly, changing not only the looks but also the function of a webpage in one click.
It really feels like Javascript becomes a metalanguage for modifying the Javascript engine, making Javascript function like a different language; then you modify it further using the already modified language, and your actual, final app takes a dozen extremely intuitive lines that get the language to do exactly what it needs. Oh, and it patches the countless bugs and shortcomings of the Javascript implementation in MSIE in the process.
I won't claim Lisp is the "most dynamic" (I'm not even sure what that means), but Lisp programmers frequently do things that are difficult-to-impossible in other languages:
create new control structures
create new syntax for existing constructs (I think every metaclass I've ever seen has its own defwhatever form)
extend the runtime (every .emacs is a runtime extension, e.g., what would it take to write calendar-mode for another editor?)
Yegge talks about it some here w.r.t. Emacs, e.g., parse XML by converting it to s-expressions, writing functions for the tags you want to process, and actually running it.
Ultimately it's not languages that write dynamic code, it's programmers, and there's going to be a learning curve to adjust your patterns to styles you're not used to. So what types of work can make the best use of dynamic capabilities? The first that comes to my mind is middleware: interfaces among heterogeneous systems, especially those with imperfectly documented APIs or APIs that change a lot, and where data serialization is dynamic.
I'd say anywhere you see REST and JSON being applied, you're more likely to find dynamic code; for instance, JavaScript, PHP, Perl, Ruby, ... are popular there at least partially because they are capable of dynamic adaptation.
Also, there's a lot of JavaScript browser code that deals with browser version and brand incompatibilities using dynamic techniques.
Yes, I feel JavaScript is a good one.
JavaScript is so flexible that people working in different languages have different variants of it for themselves. Microsoft has the Ajax library, which has a typical .NET/C# style of syntax, and there are some JavaScript libraries that use $, which looks similar to PHP syntax. It's all there because JavaScript is beautiful. How many other languages can you name that facilitate something like this?
And one should know about JavaScript's closure feature, which is state of the art and helps create amazing algorithms with great results.