Beyond the syntax of each language (e.g. print v. echo), what are some key distinctive characteristics to look out for to distinguish a programming language?
As a beginner in programming, I'm still confused between the strengths and weaknesses of each programming language and how to distinguish them beyond their aliases for common native functions. I think it's much easier to classify languages based on a set of distinctive characterstics e.g. OOP v. Functional.
There are many things that define a PL; here I'll list a few:
Is it procedural, OO, imperative?
Does it have static type checking (C#, C++, Delphi) or dynamic typing (PHP, Python, JS)?
How are references handled? (Does it hide pointers like C#?)
Does it require a runtime (C#, Java) or is it native to the OS (C, C++)?
Does it support threads? (E.g., Eiffel needs extra libraries for them.)
There are many others, like the presence of garbage collectors, the handling of parameters, etc. The Eiffel language has an interesting feature, Design by Contract; I haven't seen it in many other languages (I think C# 4.0 has it now), but it can be pretty useful if used well.
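For a taste of the Design by Contract idea, here is a minimal sketch in Scala (any language with assertions can approximate it); require and ensuring are standard-library helpers, and safeSqrt is just an illustrative name:

    // Design by Contract sketch: require checks the precondition,
    // ensuring checks the postcondition on the result.
    object Contracts extends App {
      def safeSqrt(x: Double): Double = {
        require(x >= 0, "precondition violated: x must be non-negative")
        math.sqrt(x)
      } ensuring (r => math.abs(r * r - x) < 1e-9, "postcondition violated")

      println(safeSqrt(2.0)) // 1.4142135623730951
      // safeSqrt(-1.0) would throw IllegalArgumentException
    }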
I would recommend taking a look at Bertrand Meyer's work to get a deeper understanding of how PLs work and of the things that define them. Another thing that can define a PL is its level of interaction with the system; this is what makes the difference between low-level languages and high-level languages.
Hope this helps.
Within a paradigm (imperative, functional, concatenative, term rewriting), it's sometimes best to look at the presence or absence of a particular set of features. For example, for the mainstream imperative languages:
First-class functions (see the sketch after this list)
Closures
Built in classes, prototypical inheritance, or toolkit (Example: C++, Self/JavaScript, Lua/Perl)
Complex data types (more than array)
In-built concurrency primitives
Futures
Pass by value, pass by name, pass by reference, or a combination thereof
Garbage collected or not? What kind?
Event-based
Interface based types, class based types, or no user types (Go, Java, Lua)
etc
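To make the first two items concrete, here is a minimal Scala sketch (the names are illustrative); a closure captures the variable n from its defining scope, and the resulting function is an ordinary value:

    object FirstClass extends App {
      def makeCounter(): () => Int = {
        var n = 0              // captured by the closure below
        () => { n += 1; n }
      }

      val next = makeCounter() // a function held in a plain variable
      println(next())          // 1
      println(next())          // 2

      // passing a function to another function (higher-order use)
      println(List(1, 2, 3).map(x => x * 2)) // List(2, 4, 6)
    }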
You can consider things like:
Can you call functions?
Can you pass functions to other functions?
Can you create new functions? (In C you can pass function pointers to functions, but you cannot create new functions at runtime.)
Can you create new data types?
Can you create new data types with functions that operate on them? (This is the typical basis for "OO" languages; see the sketch after this list.)
Can you execute code that was not available at compile-time (using an eval function, maybe)?
Must all types be known at compile-time?
Are types available at run-time?
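As a sketch of the "new data types with functions that operate on them" point, here is a small Scala example (Vec2 is an illustrative name): a user-defined type bundled with its own operations, the typical OO basis.

    final case class Vec2(x: Double, y: Double) {
      def +(that: Vec2): Vec2 = Vec2(x + that.x, y + that.y)
      def length: Double = math.hypot(x, y)
    }

    object VecDemo extends App {
      val v = Vec2(3, 0) + Vec2(0, 4)
      println(v)        // Vec2(3.0,4.0)
      println(v.length) // 5.0
    }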
The difference between low-level and high-level languages. (Even though "low" and "high" are relative terms.)
A high-level language will use an abstraction to hide details that low-level languages would expose to the user. For example, in Matlab or Python, you can initialize an N-dimensional array in a single command. Not so in C or assembly.
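For comparison, here is the same one-command idea in another high-level language, Scala (a sketch using only the standard library):

    object NDArray extends App {
      // a 3 x 4 x 5 array, zero-initialized, in a single expression;
      // in C this would take nested loops over manually allocated memory
      val grid = Array.ofDim[Double](3, 4, 5)
      grid(1)(2)(3) = 42.0
      println(grid(1)(2)(3)) // 42.0
    }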
IMHO the strength of a language is given by how many things you can do with it; how fast and how easy can you accomplish the goals.
The weaknesses of a language are the sum of constraints (of various types) that you encounter while you try to achieve your goal.
There are many features that a programming language may support, and these features aren't always mutually exclusive; for example, OCaml and F# are both functional and object-oriented. Writing out a list here of all the paradigms a language can support would be exhausting, but the book Programming Language Pragmatics is a comprehensive treatment of many paradigms found in programming languages.
However, for me the important things I need to know when working with a language are the following:
Is it dynamically or statically typed?
Is it a typed language at all, and if so, is the typing strong or weak?
Is it garbage collected?
Does it support pass-by-value semantics, pass-by-reference semantics, or both?
Does it support first-class functions (i.e., can functions be treated as values)?
Is it object-oriented?
Polymorphism: is it parametric or ad hoc? (See the sketch after this list.)
How expressive is the type system (i.e., can I create non-leaky abstractions)?
Overloaded methods
Generics (templates)
Exception handling.
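As a sketch of the polymorphism item, in Scala (illustrative names): firstOrNone is parametric (one definition works for every element type), while describe is ad hoc (the overload is chosen by the argument's static type).

    object Poly extends App {
      // parametric polymorphism
      def firstOrNone[T](xs: List[T]): Option[T] = xs.headOption

      // ad-hoc polymorphism via overloading
      def describe(x: Int): String    = s"the int $x"
      def describe(x: String): String = s"the string '$x'"

      println(firstOrNone(List(1, 2, 3))) // Some(1)
      println(describe(7))                // the int 7
      println(describe("seven"))          // the string 'seven'
    }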
Type system (typed vs untyped, statically vs dynamically typed, weakly and strongly typed).
Supported paradigms (procedural, object-oriented, functional, logic, multi).
Default implementation (compiler vs interpreter vs JIT-compiler).
Memory management (manual vs automatic (reference counting or GC)).
Intended domain of use (number crunching, prototyping, scripting, DSL, ...).
Generation (1GL, 2GL, 3GL, 4GL, 5GL).
The underlying natural language (English-based vs. non-English-based); however, this is mostly about syntax.
General remark: many of these classification schemes are not comprehensive and are not that good, and the links are mostly to Wikipedia, so be aware.
You can consider other characteristics such as:
Strong vs weak and static vs dynamic typing, support for generic typing
How memory is handled (is it abstracted or do you have direct control over your data, pass by ref vs pass by value)
Compiled vs interpreted vs a bit of both
The forms of user-defined types available... classes, structures, tuples, lists etc.
Whether threading facilities are inbuilt or you need to turn to external libraries
Facility for generative coding... C++ template metaprogramming is a form of this
In the case of OOP, single vs multi inheritance, interfaces, anonymous/inner classes etc.
Whether a language is multi-paradigm (i.e. C# and its support for functional programming)
Availability of reflection
The verbosity of a language or the amount of 'syntactic sugar'... e.g. C++ is quite verbose when it comes to iterating over a vector. Java is quite succinct when anonymous inner classes are used for event-handling. Python's list comprehensions save a lot of typing.
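To illustrate the syntactic-sugar point, a small Scala sketch: the comprehension form does in one line what the index-based loop spells out by hand.

    object Sugar extends App {
      val xs = Vector(1, 2, 3, 4, 5)

      // verbose, index-based version
      var squares = Vector.empty[Int]
      for (i <- 0 until xs.length) {
        if (xs(i) % 2 == 0) squares = squares :+ xs(i) * xs(i)
      }

      // the sugared version
      val sugared = for (x <- xs if x % 2 == 0) yield x * x

      println(squares == sugared) // true
    }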
For languages that appear in academic conferences like POPL or ICFP, often the language's semantics (in the form of operational or denotational semantics) are well specified. I was trying to find documented semantics for popular languages (e.g., C, Python, JavaScript) but couldn't find any.
When such languages with "heavy" features (heavy relative to languages designed as proofs of concept) are being developed, do the designers (or committee members) of those languages add features without specifying their semantics? And is that the case for most popular programming languages?
If so, I think it makes sense practically because not every person who wants to contribute to developing a language needs to be a PL researcher. But I was wondering about what kind of real-world trade-offs exist.
The semantics of some dynamic programming languages are emergent, because they are minimal in their syntactic core and are mostly defined by their libraries (the language that is actually used for programming is much larger than what is defined by the syntax). Examples are:
LISP
Perl
Tcl
Some languages are defined with so much syntactic ambiguity that the semantics end up being defined by the particular implementation. Examples:
Early C++
C++ with STL
AG Natural
Any programming language with macro capabilities, or that is normally used with a macro preprocessor ends up with semantics being redefined by the macros used (like in Domain Specific Languages). Dynamic languages that allow changes to the parsing behavior at runtime are also defined at runtime.
In object-oriented languages (and other languages that dispatch depending on the type of the objects) the semantics of an expression depend on the types of the objects involved, and those may depart largely from the semantics of equivalent expressions for built-in and standard types.
Almost all languages are usually defined with a normative notation such as BNF. This site has references to many.
Part of this is to remove ambiguities and ensure syntactic consistency. It would be difficult to build compilers or renderers without them.
Parts of this go into the design of HTML5.2 which explains some of the reasoning.
I've heard the term used a lot... what exactly does it mean for a language to be "domain-specific"?
Also, what does it mean for a language (e.g. Groovy) to support domain-specific languages?
For your first question a bit of googling will be sufficient.
As for the second question: you can implement DSLs in any language, and you can even implement eDSLs in almost any language, but some languages are much better at it than others. The key feature is metaprogramming: an ability to generate code in your host language, which means you can plug a compiler for your eDSL in anywhere. Features which facilitate compiler construction are also useful, e.g., out-of-the-box parsing tools, an extensible or just flexible syntax in the host language, algebraic data types for representing ASTs, pattern matching for simplifying compiler transformations, etc. There is a continuum of possibilities, with entirely static and unextensible languages on one side and absolutely flexible languages on the other.
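As a minimal sketch of those ingredients in Scala (all names illustrative): an algebraic data type represents the eDSL's AST, and pattern matching does the work a compiler pass would do; here the "compiler" is just an evaluator.

    sealed trait Expr
    final case class Num(value: Double)        extends Expr
    final case class Add(lhs: Expr, rhs: Expr) extends Expr
    final case class Mul(lhs: Expr, rhs: Expr) extends Expr

    object TinyDsl extends App {
      def eval(e: Expr): Double = e match {
        case Num(v)    => v
        case Add(l, r) => eval(l) + eval(r)
        case Mul(l, r) => eval(l) * eval(r)
      }

      // (1 + 2) * 3
      println(eval(Mul(Add(Num(1), Num(2)), Num(3)))) // 9.0
    }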
A "domain specific language" is one in which a class of problems (or solutions to problems) can be expressed succinctly, usually because the vocabulary aligns with the that of the problem domain, and the notation is similar (where possible) to that used by experts that work in the domain.
What this really means is a grammar representing what you can say, and a set of semantics that defines what those said things mean. This makes DSLs just like other conventional programming languages (e.g., Java) in terms of how they are implemented. In fact, you can think of such conventional languages as "DSLs" that are good at describing procedural solutions to problems (but not necessarily good at describing the problems themselves). The implication is that you need the same machinery to process DSLs as you do to process conventional languages, and that's essentially compiler machinery.
Groovy has some of this machinery (by design) which is why it can "support" DSLs.
See Domain Specific Languages for a discussion about DSLs in general, and a particular kind of metaprogramming machinery that is very helpful for implementing them.
What qualifies a programming language to be called a dynamic language? What sort of problems should I use a dynamic programming language to solve? What is the main difference between static programming languages and dynamic programming languages?
I don't think there is black and white here - there is a whole spectrum between dynamic and static.
Let's take two extreme examples for each side of the spectrum, and see where that takes us.
Haskell is an extreme in the static direction.
It has a powerful type system that is checked at compile time: if your program compiles, it is free from many common (and not so common) errors.
The compiled form is very different from the Haskell program (it is a binary). Consequently, runtime reflection and modification are hard unless you have foreseen them. In comparison to interpreting the original, the result is potentially more efficient, as the compiler is free to do funky optimizations.
So for static languages I usually think: fairly lengthy compile-time analysis needed, type system will prevent me from making silly mistakes but also from doing some things that are actually valid, and if I want to do any manipulation of a program at runtime, it's going to be somewhat of a pain because the runtime representation of a program (i.e. its compiled form) is different from the actual language itself. Also it could be a pain to modify things later on if I have not foreseen it.
Clojure is an extreme in the dynamic direction.
It too has a type system, but at compile time there is no type checking. Many common errors can only be discovered by running the program.
Clojure programs are essentially just Clojure lists (the data structure) and can be manipulated as such. So when doing runtime reflection, you are actually processing a Clojure program more or less as you would type it - the runtime form is very close to the programming language itself. So you can basically do the same things at runtime as you could at "type time". Consequently, runtime performance may suffer because the compiler can't do many up-front optimizations.
For dynamic languages I usually think: a short compilation step (basically just reading syntax), so fast and incremental development; practically no limits to what it will allow me to do; but it won't prevent me from making silly mistakes.
As other posts have indicated, other languages try to take more of a middle ground - e.g. static languages like F# and C# offer reflection capabilities through a separate API, and of course can offer incremental development by using clever tools like F#'s REPL. Dynamic languages sometimes offer optional typing (like Racket, Strongtalk), and generally, it seems, have more advanced testing frameworks to offset the lack of any sanity checking at compile time. Also type hints, while not checked at compile time, are useful hints to generate more efficient code (e.g. Clojure).
If you are looking to find the right tool for a given problem, then this is certainly one of the dimensions you can look at, but by itself it is not likely to force a decision either way. Have a think about the other properties of the languages you are considering: is it a functional or OO or logic or ... language? Does it have a good framework for the things I need? Do I need stability and binary backwards compatibility, or can I live with some churn in the compiler? Do I need extensive tooling? Etc.
A dynamic language does many tasks at runtime that a static language would do at compile time.
The tasks in question are usually one or more of: type system, method dispatch and code generation.
Which also pretty much answers the questions about their usage.
There are a lot of different definitions in use, but one possible difference is:
A dynamic language typically uses dynamic typing.
A static language typically uses static typing.
Some languages are difficult to classify as either static or dynamically typed. For example, C# is traditionally regarded as a statically typed language, but C# 4.0 introduced a static type called dynamic which behaves in some ways more like a dynamic type than a static type.
What qualifies a programming language to be called a dynamic language?
Dynamic languages are generally considered to be those that offer flexibility at run-time. Note that this does not necessarily conflict with static type systems. For example, F# was recently voted "favorite dynamic language on .NET" at a conference even though it is statically typed. Many people consider F# to be a dynamic language because it offers run-time features like meta-circular evaluation, a Read-Evaluate-Print-Loop (REPL) and dynamic typing (of sorts). Also, type inference means that F# code is not littered with type declarations like most statically typed languages (e.g. C, C++, Java, C# 2, Scala).
What are the problems for which I should choose a dynamic language?
In general, provided time and space are not of critical importance you probably always want to use languages with run-time flexibility and capabilities like run-time compilation.
This thread covers the issue pretty well:
Static/Dynamic vs Strong/Weak
The question is asked during Dynamic Languages Wizards Series - Panel on Language Design (at 24m 04s).
Answer from Jonathan Rees:
You know one when you see one
Answer from Guy Steele:
A dynamic language is one that defers as many decisions as possible to run time.
For example, decisions about array size, the number of data objects to allocate, things like that.
The concept is deferring decisions until runtime; that's what I understand to be dynamic.
It seems I've got to agree with this post when it states that
[...] code in dynamically typed languages follows static-typing conventions
Much dynamic language code I encounter does indeed seem to be quite static (thinking of PHP) whereas dynamic approaches look somewhat clumsy or unnecessary instead.
Most of the time, it's just about omitting type signatures, which, in the context of type-inference/structural typing, doesn't even have to imply dynamic typing at all.
So my question (and it's not meant to be too subjective) is: in which dynamic languages, or fields of application, are all these more advanced dynamic-language features (which couldn't be replicated so easily in static/compiled languages) actually and idiomatically used?
Examples:
Reflection
First-class continuations
Runtime object alteration/generation
Metaprogramming
Run-time code evaluation
Non-existent member behaviour
What are useful applications for such techniques?
Some examples of widespread application of the above techniques are:
Continuations make their appearance in web frameworks like Rails or Seaside. They can be used to allow an API to fake a local context. In Seaside or Rails this makes the API behave much more like a local GUI form handler than an HTTP request handler, which serves to simplify the task of coding the application's user interface elements. However, although many dynamic languages have strong support for continuations they are certainly not unique to this type of language.
Reflection is quite widely used for O/R mappers and serialisation, but many statically typed languages support reflection as well. In duck-typed languages it can be used to find out at runtime whether a facility is implemented by looking at the object's metadata. Some O/R mappers (and similar tools) work by implementing accesses to instance variables and redirecting the updates to a cached record in the data access layer. This helps to make the persistence relatively transparent to the developer, as the field accesses look much like local variables.
Runtime object alteration is slightly useful (think monkey-patching) but mostly a gimmick. There aren't many really killer uses for it that come to mind immediately, but people certainly do use it. One possible use for it is fixing slightly broken behaviour when subclassing is not an option for some reason.
Metaprogramming is quite a fuzzy term, but arguably generics and C++ templates are an example of metaprogramming taking place in statically typed languages. In languages with metaclass support, custom metaclasses can be used to implement particular behaviours such as singletons or object registries. Another metaprogramming example is Smalltalk's #doesNotUnderstand: method, which is called on attempts to invoke nonexistent methods. The method name and parameters are supplied to the implementor of #doesNotUnderstand: and can subsequently be used to construct a method invocation reflectively. Trapping this can be used (for example) to implement generic proxy mechanisms.
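Statically typed languages have grown similar hooks. As a hedged aside, Scala's scala.Dynamic marker trait gives a compile-time analogue of the #doesNotUnderstand: trap: accesses to members that do not exist are rewritten into calls to applyDynamic / selectDynamic (Recorder is an illustrative name).

    import scala.language.dynamics

    class Recorder extends Dynamic {
      def applyDynamic(name: String)(args: Any*): String =
        s"intercepted call: $name(${args.mkString(", ")})"
      def selectDynamic(name: String): String =
        s"intercepted field access: $name"
    }

    object DynamicDemo extends App {
      val r = new Recorder
      println(r.greet("world")) // intercepted call: greet(world)
      println(r.anything)       // intercepted field access: anything
    }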
LISP programmers would argue that LISP is the most dynamic language of all due to its first class support for diddling directly with the parse trees of the code (known as 'macros'). This facility makes implementing DSLs trivial in LISP - and integrating them transparently into your code base.
All the features you enumerate are also available in statically typed languages, some with constraints.
Reflection: Present in Java, C# (not type safe).
First-class continuations: restricted support in Scala (maybe others)
Runtime object alteration: changing the type of an object is supported in a restricted form in C# with extension methods (and will be in Java 7) and with implicit type conversions in Scala. Although open classes are not supported, most of the use cases are covered by type conversions.
Metaprogramming: I would say Metaprogramming is the heading for a lot of related features like reflection, type changes at runtime, AOP etc.
So there is not a lot left to discuss that is supported only by dynamic languages. Reflection, for example, circumvents the type system, but it is useful in certain situations where this kind of flexibility is needed; the same is true in dynamic languages.
The open-class feature supported by Ruby is something that compiled languages will never support. It is the most flexible form of metaprogramming possible (with all the implications: security, performance, maintainability). You can change the classes of the platform. It's used by Ruby on Rails to create methods of domain objects from metadata on the fly. In a statically typed language you at least have to create (or generate the code of) the interface of your domain object.
If you're looking for the "most dynamic languages", all homoiconic languages like LISP and Prolog are good candidates. Interestingly, C# is somewhat homoiconic with the expression trees in LINQ.
You should visit Douglas Crockford's Wrrrld Wide Web and see his wizardry over JavaScript. JavaScript is usually written in a pretty straightforward and simple manner, like slightly simplified C. But that's only the surface. The immutable keywords are a small percentage of the language's power. Most of it lies in the objects and methods exported by the system, and these are fully mutable. You can replace or extend methods on the fly, you can replace pretty deeply rooted system methods, nest eval(), load generated <SCRIPT> on the fly, and so on. This is usable for writing all kinds of language extensions, frameworks, toolboxes and such. Instead of 200 lines of straightforward JavaScript, you write 50 lines that modify how JavaScript works and another 50 that use the new syntax to get the work done. You can generate whole pages on the fly, including the JS embedded in them. You turn the webpage structure into data storage. You replace frequently used methods of popular objects, and of your own, to change their behavior on the fly, changing not only the looks but also the function of a webpage in one click.
It really feels like JavaScript becomes a metalanguage for modifying the JavaScript engine, making JavaScript function like a different language; then you modify it further using the already-modified version, and your actual, final app takes a dozen extremely intuitive lines that get the language to do exactly what it needs. Oh, and it patches the countless bugs and shortcomings of the JavaScript implementation in MSIE in the process.
I won't claim Lisp is the "most dynamic" (I'm not even sure what that means), but Lisp programmers frequently do things that are difficult-to-impossible in other languages:
create new control structures
create new syntax for existing constructs (I think every metaclass I've ever seen has its own defwhatever form)
extend the runtime (every .emacs is a runtime extension, e.g., what would it take to write calendar-mode for another editor?)
Yegge talks about it some here w.r.t. Emacs, e.g., parse XML by converting it to s-expressions, writing functions for the tags you want to process, and actually running it.
Ultimately it's not languages that write dynamic code, it's programmers, and there's going to be a learning curve to adjust your patterns to styles you're not used to. So what types of work can make the best use of dynamic capabilities? The first that comes to my mind is middleware: interfaces among heterogeneous systems, especially those with imperfectly documented APIs or APIs that change a lot, where data serialization must be dynamic.
I'd say anywhere you see REST and JSON being applied, you're more likely to find dynamic code; for instance, JavaScript, PHP, Perl, Ruby, ... are popular at least partially because they are capable of dynamic adaptation.
Also, there's a lot of JavaScript browser code that deals with browser version and brand incompatibilities using dynamic techniques.
Yes, I feel JavaScript is a good one.
JavaScript is so flexible that people working in different languages have different variants of it. For example, Microsoft has the Ajax library, which has a typical .NET/C#-style syntax, and there are JavaScript libraries built around $ that look similar to PHP syntax. It's all there because JavaScript is a beauty. How many other languages can you name that facilitate something like this?
One should also know about JavaScript's closure feature, which is state of the art and helps create amazing algorithms with great results.
What makes Scala such a wonderful language, other than the type system? Almost everything I read about the language brings out 'strong typing' as a big reason to use Scala, but there has to be more than that. What are some of the other compelling and/or cool language features that make Scala a really useful tool?
Here are some of the things that made me favour Scala (over, say, usual Java):
a) Type inference. The Java way of doing it:
Map<Something, List<SomethingElse>> list = new HashMap<Something, List<SomethingElse>>()
.. is rather verbose compared to Scala. The compiler should be able to figure out the type if you give it one of these.
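For instance, this sketch infers Map[String, List[Int]] entirely from the right-hand side:

    object Inference extends App {
      val m = Map("a" -> List(1), "b" -> List(2, 3))
      println(m("b")) // List(2, 3)
    }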
b) First-class functions. Again, this functionality can be emulated with classes, but it's ugly.
c) Collections that have map and fold. These two tie in with (b), and also these two are something I wish for every time I have to write Java.
d) Pattern matching and case classes.
e) Variance, which means that if S extends T, then List[S] extends List[T] as well.
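A short sketch tying (c) and (d) together (Shape and friends are illustrative names): case classes form an algebraic data type, pattern matching deconstructs it, and map/fold collapse the collection.

    sealed trait Shape
    final case class Circle(r: Double)          extends Shape
    final case class Rect(w: Double, h: Double) extends Shape

    object Shapes extends App {
      def area(s: Shape): Double = s match {
        case Circle(r)  => math.Pi * r * r
        case Rect(w, h) => w * h
      }

      val totalArea = List(Circle(1), Rect(2, 3)).map(area).foldLeft(0.0)(_ + _)
      println(totalArea) // pi + 6 = 9.141592653589793
    }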
Throw in some static types goodness as well, and I was sold on the language quite fast.
It's a mash up of the best bits from a bunch of languages, what's not to love:
Ruby's terse syntax
Java's performance
Erlang's Actor Support
Closures/Blocks
Convenient shorthand for maps & arrays
Scala is often paraded for having closures and implicits. Not surprising really, as lack of closures and explicit typing are perhaps the two biggest sources of Java boilerplate!
But once you examine it a little deeper, it goes far beyond Java-without-the-annoying-bits. Perhaps the greatest strength of Scala is not one specific named feature but how successful it is in unifying all of the features mentioned in other answers.
Post Functional
The union of object orientation and functional programming for example: Because functions are objects, Scala was able to make Maps implement the Function interface, so when you use a map to look up a value, it's no different syntactically from using a function to calculate a value. In unifying these paradigms so well, Scala truly is a post-functional language.
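A quick sketch of that unification: a Map can be passed wherever a function is expected, and lookup is syntactically identical to application.

    object MapAsFunction extends App {
      val roman = Map(1 -> "I", 2 -> "II", 3 -> "III")
      println(roman(2))                 // II, same syntax as a function call
      println(List(1, 2, 3).map(roman)) // List(I, II, III)
    }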
Or operator overloading, which is achieved by not actually having operators, they're just methods used in infix notation. So 1 + 2 is just calling the + method on an integer. If the method was named plus instead then you'd use it as 1 plus 2 which is no different from 1.plus(2). This is made possible because of another combination of features; everything in Scala is an object, there are no primitives, so integers can have methods.
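For example (Complex is an illustrative name):

    final case class Complex(re: Double, im: Double) {
      def +(that: Complex): Complex = Complex(re + that.re, im + that.im)
    }

    object Operators extends App {
      println(Complex(1, 2) + Complex(3, 4))  // Complex(4.0,6.0)
      println(Complex(1, 2).+(Complex(3, 4))) // the same call, written explicitly
      println((1).+(2))                       // 3: even Int's + is a method
    }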
Other Feature Fusion
Type classes were also mentioned, achieved by a combination of higher-kinded types, singleton objects, and implicits.
Other features that work well together are case classes and pattern matching, allowing you to easily build and deconstruct algebraic data types, without having to manually write all the tedious equality, hashcode, constructor and getter/setter logic that Java demands.
Specifying immutability by default, offering lazy values, and providing first class functions all combine to give you a language that's very suited to building efficient functional data structures.
The list goes on, but I've been using Scala for over 3 years now, and I'm still amazed almost daily at how well everything just works together.
Efficient and Versatile
Scala is also a small language, with a spec that (surprisingly!) only needs to be around 1/3 the size of Java's. This is partly because Java has a lot of special cases in the spec that Scala simplifies away, partly because of removing features such as primitives and operators, and partly because a lot of functionality has been moved from the language and into the libraries.
As a benefit of this, all the techniques available to the Scala library authors are also available to any Scala user, which makes it a great language for defining your own control-flow constructs and for building DSLs. This has been used to great effect in projects like Akka - a 3rd-party Actor framework.
Deep
Finally, it scales the full range of programming styles.
The runtime interpreter (known as the REPL) allows you to very quickly explore ideas in an interactive session, and Scala files can also be run as scripts without needing explicit compilation. When coupled with type inference, this gives Scala the feel of a dynamic language such as Ruby, Perl, or a bash script.
At the other end of the spectrum, traits, classes, objects and self-types allow you to build a full-scale enterprise system based on distinct components and using dependency injection without the need of 3rd-party tools. Scala also integrates with Java libraries at a level almost on-par with native Java, and by running on the JVM can take advantage of all the speed benefits offered on that platform, as well as being perfectly usable in containers such as tomcat, or with OSGi.
I'm new to Scala, but my impression is:
Really good JVM integration will be the driving factor. JRuby can call java and java can call JRuby code, but it's explicitly calling into another language, not the clean integration of Scala-Java. So you can use Java libraries, and even mix and match in the same project.
I started looking at Scala when I realized that the thing that will drive the next great language is easy concurrency. The JVM has good concurrency from a performance standpoint. I'm sure someone will say that Erlang is better, but Scala is actually usable by normal programmers.
Where Java falls down is that it's just so painfully verbose. It takes way too many characters to create and pass a Functor. Scala allows passing functions as arguments.
It isn't possible in Java to create a union type, or to apply an interface to an existing class. These are both easy in Scala.
Static typing usually has a big penalty of verboseness. Scala eliminates this downside while still giving the upside of static typing, which is compile time type checking, and it makes code assist in editors easier.
The ability to extend the language. This has been the thing that has kept Lisp going for decades, and that allowed Ruby on Rails.
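As a sketch of that extensibility in Scala (retry is an illustrative name, not a library function): by-name parameters and currying let a user-defined construct read like a built-in one.

    object Extend extends App {
      def retry[T](times: Int)(body: => T): T =
        try body
        catch {
          case e: Exception if times > 1 =>
            println(s"attempt failed: ${e.getMessage}; retrying")
            retry(times - 1)(body)
        }

      var calls = 0
      val answer = retry(3) {
        calls += 1
        if (calls < 3) throw new RuntimeException(s"boom $calls")
        42
      }
      println(answer) // 42, after two failed attempts
    }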
The type system really is Scala's most distinguishing feature. It also has a lot of syntactic conveniences over, say, Java.
But for me, the most compelling features of Scala are:
First-class modules.
Higher-kinded types (type constructor polymorphism).
Implicits.
In effect, these features let you approximate (and in some ways surpass) Haskell's type classes. Combined, they let you write exceptionally modular code.
Put shortly:
You get the power and platform-independency of the Java libraries, but without the boilerplate and verbosity.
You get the simplicity and productivity of Ruby, but with static typing and compiled bytecode.
You get the functional goodnesses and concurrency support of Haskell, but without complete paradigm shift and with the benefits of object-orientation.
What I find especially attractive among all of its magnificent features:
Most of the object-oriented design patterns that require loads of boilerplate code in Java are supported natively, e.g. Singleton (via objects), Adapter, Decorator (via traits and implicits), Visitor (via pattern matching), Strategy (via closures), etc.; see the sketch at the end of this answer.
You can define your domain models and DSLs very concisely, then you can extend them with the necessary features (notification, association handling; parsing, serialization), without the need of code generation or frameworks.
And finally, there is full interoperability with the well-supported Java platform; you can mix Java and Scala in both directions. There is not much of a penalty, nor are there compatibility problems, when switching to Scala after having experienced the annoyances of Java that make code hard to maintain.
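Here is the promised sketch of two of those patterns done natively (Config and the pricing names are illustrative): Singleton is just an object, and Strategy is just a function value.

    object Config {              // Singleton: exactly one instance, lazily created
      val retries = 3
    }

    object Patterns extends App {
      // Strategy: the interchangeable algorithm is a plain function
      def applyPricing(amount: Double, strategy: Double => Double): Double =
        strategy(amount)

      val withTax: Double => Double    = _ * 1.2
      val discounted: Double => Double = _ * 0.9

      println(applyPricing(100.0, withTax))    // 120.0
      println(applyPricing(100.0, discounted)) // 90.0
      println(Config.retries)                  // 3
    }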
Functional programming brought to the JVM
Supposedly it's very easy to make Scala code run concurrently on multiple processors.
Expressiveness of control flow. For example, it's very common to have a collection of data which you need to process in some way. This might be a list of trades in which the processing involves grouping by some properties (the currencies of the investment instruments) and then doing a summation (to get totals-per-currency perhaps).
In Java this involves separating out a piece of code to do the grouping (a few lines of for-loop) and then another piece of code to do the summation (another for loop). In Scala, this type of thing is typically achievable in one line of code using functional programming and then folding, which reads very expressively l-to-r.
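A sketch of the trades example (Trade and its fields are illustrative): group by currency, then total each group, in one expression.

    final case class Trade(currency: String, amount: Double)

    object Totals extends App {
      val trades = List(Trade("USD", 10.0), Trade("EUR", 5.0), Trade("USD", 2.5))

      val totalsPerCurrency =
        trades.groupBy(_.currency)
              .map { case (ccy, ts) => ccy -> ts.map(_.amount).sum }

      println(totalsPerCurrency) // Map(USD -> 12.5, EUR -> 5.0), order may vary
    }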
Of course, this is just an argument for a functional language over Java.
The great features of Scala have already been mentioned. One thing that shines through past all the features, though, is how tastefully everything is integrated.
Scala manages to be one of the most powerful languages around without feeling like features were bolted on in haste. Nor is the language an academic exercise in proving a point. Innovation and really advanced concepts are brought into the language with uncanny practicality and elegance.
In short: Martin Odersky is a pure design genius. That is what's so great about Scala!
I want to add that the multi-paradigm (OO and FP) nature gives Scala an edge over other languages.
Every day you code Java you will become more and more miserable, every day you code Scala you will become happier.
Here are a few fairly in-depth explanations of the appeal of functional languages:
How/why do functional languages (specifically Erlang) scale well?
If we abandon the feature discussion and talk about style, I would say it's the pipeline style of coding. You start from some object or collection, type a dot and a property, or a dot and a transformation, and keep going until you form the desired result. This makes it easy to write a chain of transformations that is also easy to read. Traits, to some extent, will also allow you to apply the same approach to constructing types.
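A closing sketch of that pipeline style: each dot applies the next transformation, and the chain reads left to right.

    object Pipeline extends App {
      val result = (1 to 10)
        .filter(_ % 2 == 0) // keep the evens: 2, 4, 6, 8, 10
        .map(x => x * x)    // square them: 4, 16, 36, 64, 100
        .take(3)            // first three: 4, 16, 36
        .sum                // total: 56
      println(result)
    }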