Which Kotlin features are not available in statically compiled Groovy? [closed] - groovy

Kotlin and Groovy look like very similar languages with very similar features if we compile Groovy statically. Which features, apart from null safety, does Kotlin have that are missing in Groovy?

Kotlin is a JVM language which, IMO, is trying to improve on Java in features and conciseness while remaining imperative and static. Groovy has a similar concept, except it decided to go dynamic. As a result, a number of language features are similar.
Here are some differences I'm aware of
Static vs dynamic: Groovy was designed as a dynamic language, and @CompileStatic, while a great annotation (I use it a lot), was added later. The feature feels a bit bolted on, and it does not force people to code in a static manner. It's not usable everywhere (e.g. my Spock tests seem to fail to compile with it), and even with it enabled, Groovy still shows some odd dynamic behaviour every now and then. Kotlin is 100% static; dynamic is not an option.
There are a number of other features that it has, though; a few of them are sketched in the code example after this list. I'd recommend you look at the reference, and you may spot a few more, e.g. https://kotlinlang.org/docs/reference/
Data classes - concise with a copy function (a bit like case classes in Scala)
The null safety check you mentioned (which is a big pro)
The ability to destructure items: val (name, age) = person
Higher-order functions, defined like fun <T> doStuff(body: (Int) -> T): T, which are much better than Groovy closures IMO (very similar to Scala's)
Type checks and smart casts are nice: https://kotlinlang.org/docs/reference/typecasts.html
Companion objects: in the same way that Scala tries to remove static methods from classes, Kotlin tries the same thing.
Sealed Classes to restrict inheritance (again Scala has something similar)
The "Nothing" type, which is a subtype of everything (another crucial concept in Scala).
when expressions for basic pattern matching: https://kotlinlang.org/docs/reference/control-flow.html
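To make a few of those bullets concrete, here is a small, self-contained Kotlin sketch (the Person and Shape types are just made-up examples):
data class Person(val name: String, val age: Int)           // equals/hashCode/toString/copy for free

sealed class Shape                                           // the compiler knows every subtype
class Circle(val radius: Double) : Shape()
class Square(val side: Double) : Shape()

fun <T> doStuff(body: (Int) -> T): T = body(42)              // higher-order function

fun main() {
    val person = Person("Ada", 36)
    val older = person.copy(age = 37)                        // copy with one field changed
    val (name, age) = older                                  // destructuring declaration
    val shape: Shape = Circle(1.0)
    val area = when (shape) {                                // 'when' over a sealed class is exhaustive
        is Circle -> Math.PI * shape.radius * shape.radius   // smart cast to Circle
        is Square -> shape.side * shape.side
    }
    println("$name, $age, $area, ${doStuff { it * 2 }}")
}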
As you can see, it does borrow from languages other than Groovy. They have cherry-picked a number of great features in an attempt to make a good language. Naturally Groovy has its own goodness; I've only focused on what Kotlin has and not vice versa.
Another plus is that, being made by an IDE maker, the compiler is very quick and has great IDE support. Not saying Groovy does not have good support, but my current project does take a long time to compile, and the refactoring tools always assume you are coding in a dynamic fashion.
I'd recommend you try out the Koans to get a feel for them to see which features of the language you like and how it compares to groovy (https://github.com/Kotlin/kotlin-koans).

Kotlin is designed as a statically typed language, with a great type system and the other benefits of static typing. Groovy is first of all a dynamically typed language, and only then a statically typed one.
When you enable static compilation in Groovy, you get just Java with syntax sugar. Kotlin, on the other hand, has two kinds of references in its type system, nullable and non-nullable, so you can write code with fewer NPEs. If you are asking about only one feature, that's it.
The second great feature of Kotlin is that it doesn't do any implicit conversions; Groovy, on the other hand, implicitly converts double to BigDecimal and so on.
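A minimal Kotlin sketch of both points (the function and variable names are made up): a nullable parameter must be handled explicitly, and numeric types are never converted implicitly.
fun shout(name: String?): String {
    // name may be null, so a plain name.uppercase() would not compile;
    // you must use a safe call (?.) or check for null first
    return name?.uppercase() ?: "no name"
}

fun main() {
    println(shout("kotlin"))    // KOTLIN
    println(shout(null))        // no name

    val i: Int = 1
    val l: Long = i.toLong()    // explicit conversion required
    // val bad: Long = i        // does not compile: no implicit Int -> Long widening
    println(l)
}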
But Kotlin has a lot of other features, like smart casts, ADTs (doc), type-safe builders, zero-cost abstractions and, finally, great IDE support.
Also, I'm not sure about the quality of Groovy's type inference (in closures, for example, we need additional annotations, meh), but in Kotlin type inference works like a charm, without annotations, in every piece of the language.
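A tiny sketch of what those two points look like in practice (describe is a made-up function):
fun describe(x: Any): String {
    if (x is String) {
        // smart cast: inside this branch x is already a String, no explicit cast needed
        return "a string of length ${x.length}"
    }
    return "something else: $x"
}

fun main() {
    // the lambda parameter's type is inferred from the list, no annotation needed
    val lengths = listOf("groovy", "kotlin").map { it.length }
    println(lengths)              // [6, 6]
    println(describe("hello"))    // a string of length 5
}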
So statically typed compilation is a first-class citizen in Kotlin; in Groovy it is not.

Related

Groovy and dynamic methods: need Groovy veteran enlightenment

First, I have to say that I really like Groovy and all the good stuff it is bringing to the Java dev world. But since I'm using it for more than little scripts, I have some concerns.
In this Groovy help page about dynamic vs static typing, there is this statement about the absence of compilation errors/warnings when you have a typo in your code, because it could be a call to a method added later at runtime:
It might be scary to do away with all of your static typing and compile time checking at first. But many Groovy veterans will attest that it makes the code cleaner, easier to refactor, and, well, more dynamic.
I pretty much agree with the 'more dynamic' part, but not with cleaner and easier to refactor:
For the other two statements I'm not sure: from my Groovy-beginner perspective, this results in less code, but in code that is more difficult to read later and more troublesome to maintain (you can no longer rely on the IDE to find who declares a dynamic method and who uses one).
To clarify, I find reading Groovy code very pleasant; I love the collections and closures (a concise and expressive way of tackling complicated problems).
But I have a lot of trouble in these situations:
no more auto-completion inside 'builders' using Map (of Map (of Map)) everywhere
confusing dynamic method calls (you don't know if it is a typo or a dynamic name)
method extraction is more complicated inside closures (often resulting in duplicated code: 'it is only a small closure after all')
hard to guess closure parameters when you have to write one for a method of a subsystem
no more learning by browsing the code: you have to use text search instead
I can only see some benefits with GORM, but in that case the dynamic methods are well known and my IDE is aware of them (so to me it looks more like systematic code generation than dynamic methods).
I would be very glad to learn from Groovy veterans how they can attest to these benefits.
It does lead to different classes of bugs and processes. It also makes writing tests faster and more natural, helping to alleviate the bug issues.
Discovering where behavior is defined, and used, can be problematic. There isn't a great way around it, although IDEs are getting better at it over time.
Your code shouldn't be more difficult to read--mainline code should be easier to read. The dynamic behavior should disappear into the application, and be documented appropriately for developers that need to understand functionality at those levels.
Magic does make discovery more difficult. This implies that other means of documentation, particularly human-readable tests (think easyb, spock, etc.) and prose, become that much more important.
This is somewhat old, but i'd like to share my experience if someone comes looking for some thoughts on the topic:
Right now we are using Eclipse 3.7 and groovy-eclipse 2.7 on a small team (3 developers), and since we don't have test scripts, most of our Groovy development is done by explicitly using types.
For example, when using service classes methods:
void validate(Product product) {
    // groovy stuff
}

Box pack(List<Product> products) {
    def box = new Box()
    box.value = products.inject(0) { total, item ->
        // some BigDecimal calculations =)
    }
    box
}
We usually fill in the types, which enables Eclipse to autocomplete and, most importantly, allows us to refactor code, find usages, etc.
This blocks us from using metaprogramming, except for categories, which I found are supported and detected by groovy-eclipse.
Still, Groovy is pretty good and a LOT of our business logic is in groovy code.
We had two issues in production code when using Groovy, and both cases were due to bad manual testing.
We also have a lot of XML building and parsing, and we validate it before sending it to webservices and the likes.
There's a small script we use to connect to an internal system whose usage is very restricted (and not needed in other parts of the system). I developed this code using entirely dynamic typing, overriding methods using the metaclass and all that stuff, but this is an exception.
I think Groovy 2.0 (with groovy-eclipse coming along, of course) and its @TypeChecked will be great for those of us who use Groovy as a "Java++".
To me there are 2 types of refactoring:
IDE based refactoring (extract to method, rename method, introduce variable, etc.).
Manual refactoring. (moving a method to a different class, changing the return value of a method)
For IDE based refactoring I haven't found an IDE that does as good of a job with Groovy as it does with Java. For example in eclipse when you extract to method it looks for duplicate instances to refactor to call the method instead of having duplicated code. For Groovy, that doesn't seem to happen.
Manual refactoring is where I believe that you could see refactoring made easier. Without tests though I would agree that it is probably harder.
The statement about cleaner code is 100% accurate. I would venture a guess that going from good Java to good Groovy code is at least a 3:1 reduction in lines of code. Being a newbie at Groovy, though, I would strive to learn at least one new way to do something every day. Something that greatly helped me improve my Groovy was to simply read the APIs. I feel that Collection, String, and List are probably the ones that have the most functionality and that I used the most to help make my Groovy code actually Groovy.
http://groovy.codehaus.org/groovy-jdk/java/util/Collection.html
http://groovy.codehaus.org/groovy-jdk/java/lang/String.html
http://groovy.codehaus.org/groovy-jdk/java/util/List.html
Since you edited the question I'll edit my answer :)
One thing you can do is tell intellij about the dynamic methods on your objects: What does 'add dynamic method' do in Groovy/IntelliJ?. That might help a little bit.
Another trick that I use is to type my objects when doing the initial coding and remove the typing when I'm done. For example I can never seem to remember if it's .substring(..) or .subString(..) on a String. So if you type your object you get a little better code completion.
As for your other bullet points, I'd really need to look at some code to be able to give a better answer.

Are Groovy and Groovy++ two languages or one language?

Are Groovy 1.x (from http://groovy.codehaus.org) and Groovy++ (from http://code.google.com/p/groovypptest) two separate languages or are they two parts of just one language? Why or why not?
The answer to your question is provided on the Groovy++ website. It's the second sentence on the page you linked to!
Groovy++ is statically typed extension of Groovy programming language.
Groovy++ is an extension to Groovy. It builds on Groovy, adding true static typing in some or all classes. The goal is performance and other improvements. Again, the page you link to has a complete description.
Recently, Groovy has really improved its performance, and with the addition of Java 7's invokedynamic, performance may soon be almost as fast as Groovy++/Java (the difference being mostly negligible).
From a software architectural point of view, Groovy++ may be an extension of Groovy.
But look at it at the language level: what happens if you use dynamic method invocation and annotate that class with @Typed? The compiler will complain about the unknown methods.
I think Groovy++ is a new language, because an @Typed-annotated Groovy class does not allow dynamic calls the way a non-annotated Groovy class does. It changes the semantics of your code.
Groovy++ is a subset of Groovy.
Formally, Groovy++ is just a Groovy library. Groovy++ doesn't even have any special syntax and uses Java annotations.
But in fact Groovy++ is a Groovy dialect (not a new language, of course).
@Peter: Groovy++ forbids some Groovy liberties (as I understand it, for reasons of good code style).
But Groovy++ provides many semantic extensions, such as a very complex type-inference system, traits, extension methods, a functional programming library, etc.
Because of this, I think Groovy++ isn't just a "subset of Groovy", as you said.

What's so great about Scala? [closed]

What makes Scala such a wonderful language, other than the type system? Almost everything I read about the language brings out 'strong typing' as a big reason to use Scala, but there has to be more than that. What are some of the other compelling and/or cool language features that make Scala a really useful tool?
Here are some of the things that made me favour Scala (over, say, usual Java):
a) Type inference. The Java way of doing it:
Map<Something, List<SomethingElse>> list = new HashMap<Something, List<SomethingElse>>()
.. is rather verbose compared to Scala. The compiler should be able to figure out the type when you assign one of these maps to the variable.
b) First-class functions. Again, this functionality can be emulated with classes, but it's ugly.
c) Collections that have map and fold. These two tie in with (b), and also these two are something I wish for every time I have to write Java.
d) Pattern matching and case classes.
e) Variances, which mean that if S extends T, then List[S] extends List[T] as well.
Throw in some static types goodness as well, and I was sold on the language quite fast.
It's a mash up of the best bits from a bunch of languages, what's not to love:
Ruby's terse syntax
Java's performance
Erlang's Actor Support
Closures/Blocks
Convenient shorthand for maps & arrays
Scala is often paraded for having closures and implicits. Not surprising really, as lack of closures and explicit typing are perhaps the two biggest sources of Java boilerplate!
But once you examine it a little deeper, it goes far beyond Java-without-the-annoying-bits. Perhaps the greatest strength of Scala is not one specific named feature, but how successful it is in unifying all of the features mentioned in other answers.
Post Functional
The union of object orientation and functional programming for example: Because functions are objects, Scala was able to make Maps implement the Function interface, so when you use a map to look up a value, it's no different syntactically from using a function to calculate a value. In unifying these paradigms so well, Scala truly is a post-functional language.
Or operator overloading, which is achieved by not actually having operators, they're just methods used in infix notation. So 1 + 2 is just calling the + method on an integer. If the method was named plus instead then you'd use it as 1 plus 2 which is no different from 1.plus(2). This is made possible because of another combination of features; everything in Scala is an object, there are no primitives, so integers can have methods.
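Kotlin, from the first question on this page, makes the same point quite literally: an operator is just a method with a conventional name. A minimal sketch (Money is a made-up class):
data class Money(val cents: Long) {
    operator fun plus(other: Money) = Money(cents + other.cents)
}

fun main() {
    val total = Money(150) + Money(250)   // compiles to Money(150).plus(Money(250))
    println(total)                        // Money(cents=400)
}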
Other Feature Fusion
Type classes were also mentioned, achieved by a combination of higher-kinded types, singleton objects, and implicits.
Other features that work well together are case classes and pattern matching, allowing you to easily build and deconstruct algebraic data types, without having to manually write all the tedious equality, hashcode, constructor and getter/setter logic that Java demands.
Specifying immutability by default, offering lazy values, and providing first class functions all combine to give you a language that's very suited to building efficient functional data structures.
The list goes on, but I've been using Scala for over 3 years now, and I'm still amazed almost daily at how well everything just works together.
Efficient and Versatile
Scala is also a small language, with a spec that (surprisingly!) only needs to be around 1/3 the size of Java's. This is partly because Java has a lot of special cases in the spec that Scala simplifies away, partly because of removing features such as primitives and operators, and partly because a lot of functionality has been moved from the language and into the libraries.
As a benefit of this, all the techniques available to the Scala library authors are also available to any Scala user, which makes it a great language for defining your own control-flow constructs and for building DSLs. This has been used to great effect in projects like Akka - a 3rd-party Actor framework.
Deep
Finally, it scales the full range of programming styles.
The runtime interpreter (known as the REPL) allows you to very quickly explore ideas in an interactive session, and Scala files can also be run as scripts without needing explicit compilation. When coupled with type inference, this gives Scala the feel of a dynamic language such as Ruby, Perl, or a bash script.
At the other end of the spectrum, traits, classes, objects and self-types allow you to build a full-scale enterprise system based on distinct components and using dependency injection without the need of 3rd-party tools. Scala also integrates with Java libraries at a level almost on-par with native Java, and by running on the JVM can take advantage of all the speed benefits offered on that platform, as well as being perfectly usable in containers such as tomcat, or with OSGi.
I'm new to Scala, but my impression is:
Really good JVM integration will be the driving factor. JRuby can call Java and Java can call JRuby code, but it's explicitly calling into another language, not the clean integration of Scala and Java. So you can use Java libraries, and even mix and match in the same project.
I started looking at Scala when I had the realization that the thing which will drive the next great language is easy concurrency. The JVM has good concurrency from a performance standpoint. I'm sure someone will say that Erlang is better, but Scala is actually usable by normal programmers.
Where Java falls down is that it's just so painfully verbose. It takes way too many characters to create and pass a Functor. Scala allows passing functions as arguments.
It isn't possible in Java to create a union type, or to apply an interface to an existing class. These are both easy in Scala.
Static typing usually has a big penalty of verboseness. Scala eliminates this downside while still giving the upside of static typing, which is compile time type checking, and it makes code assist in editors easier.
The ability to extend the language. This has been the thing that has kept Lisp going for decades, and that allowed Ruby on Rails.
The type system really is Scala's most distinguishing feature. It also has a lot of syntactic conveniences over, say, Java.
But for me, the most compelling features of Scala are:
First-class modules.
Higher-kinded types (type constructor polymorphism).
Implicits.
In effect, these features let you approximate (and in some ways surpass) Haskell's type classes. Combined, they let you write exceptionally modular code.
In short:
You get the power and platform-independency of the Java libraries, but without the boilerplate and verbosity.
You get the simplicity and productivity of Ruby, but with static typing and compiled bytecode.
You get the functional goodness and concurrency support of Haskell, but without a complete paradigm shift and with the benefits of object orientation.
What I find especially attractive, among all of its magnificent features:
Most of the object-oriented design patterns which require loads of boilerplate code in Java are supported natively, e.g. Singleton (via objects), Adapter, Decorator (via traits and implicits), Visitor (via pattern matching), Strategy (via closures) etc.
You can define your domain models and DSLs very concisely, then you can extend them with the necessary features (notification, association handling; parsing, serialization), without the need of code generation or frameworks.
And finally, there is full interoperability with the well-supported Java platform. You can mix Java and Scala in both directions. There are not many penalties or compatibility problems when switching to Scala after having experienced the annoyances of Java that make code hard to maintain.
Functional Programming brought to JVM
Supposedly it's very easy to make Scala code run concurrently on multiple processors.
Expressiveness of control flow. For example, it's very common to have a collection of data which you need to process in some way. This might be a list of trades in which the processing involves grouping by some properties (the currencies of the investment instruments) and then doing a summation (to get totals-per-currency perhaps).
In Java this involves separating out a piece of code to do the grouping (a few lines of for-loop) and then another piece of code to do the summation (another for loop). In Scala, this type of thing is typically achievable in one line of code using functional programming and then folding, which reads very expressively l-to-r.
Of course, this is just an argument for a functional language over Java.
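To make that concrete, here is roughly what such a pipeline looks like in a statically typed JVM language; the sketch below is in Kotlin (the language from the first question on this page) rather than Scala, and Trade is a made-up type:
data class Trade(val currency: String, val amount: Double)

fun main() {
    val trades = listOf(Trade("USD", 100.0), Trade("EUR", 50.0), Trade("USD", 25.0))
    // group by currency, then sum each group: one expression that reads left to right
    val totalsPerCurrency = trades
        .groupBy { it.currency }
        .mapValues { (_, group) -> group.sumOf { it.amount } }
    println(totalsPerCurrency)    // {USD=125.0, EUR=50.0}
}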
The great features of Scala have already been mentioned. One thing that shines through all the features, though, is how tastefully everything is integrated.
Scala manages to be one of the most powerful languages around without feeling like features were bolted on in haste. Nor is the language an academic exercise in proving a point. Innovation and really advanced concepts are brought into the language with uncanny practicality and elegance.
In short: Martin Odersky is a pure design genius. That is what's so great about Scala!
I want to add that the multi-paradigm (OO and FP) nature gives Scala an edge over other languages.
Every day you code Java you will become more and more miserable, every day you code Scala you will become happier.
Here are a few fairly in-depth explanations for the appeal of functional languages.
How/why do functional languages (specifically Erlang) scale well?
If we set aside the feature discussion and talk about style, I would say it's the pipeline style of coding. You start from some object or collection, type a dot and a property or a dot and a transformation, and continue until you form the desired result. This makes it easy to write a chain of transformations that is also easy to read. Traits, to some extent, also allow you to apply the same approach to constructing types.

What programming basics should I learn? [closed]

I've had a very odd learning experience in programming. I was sort of taught C++, but I didn't get a lot out of it. Here's what I did get out of it: headers and variable declaration. And I tried to teach myself PHP, in which I learned a lot. The problem is, a lot of my knowledge is widespread, random, and designed for specific situations.
So, my question is: what basics are there to programming in most languages?
The term "basics" implies a short list, but to be an effective programmer you have to learn a LOT of concepts. Once you do learn them, though, you'll be able to apply many of the same concepts across languages.
I've compiled a (long!) list of concepts that are important in several, if not most, programming languages.
Language syntax
Keywords
Naming conventions
Operators
Assignment
Arithmetic
String
Other
Literals
Conditionals
If/else
Switch/case
What is considered true or false (0? Empty String? Null?)
Looping constructs
for
foreach/iteration
while
do-while
Exception handling
importing/including code from other files
Type system
Strong/weak
Static/dynamic
Memory management
Scoping
What scopes are available
How overlapping scopes are handled
Language constructs/program organization
Variables
Methods
Functions
Classes
Closures
Packages/Modules/Namespaces
Data types and data structures
Primitives
Objects
Arrays/Lists
Maps/Hash/Associative Array
Sets
Enum
Strings
String concatenation
String comparison and equality
Substring
Replacement
Mutability
Syntax for creating literal strings
Functions, Methods, Closures
Method/function overloading
Method/function overriding
Parameter passing (pass-by-value/pass-by-reference)
Returning values (single return/multiple return)
Language type (not mutually exclusive)
Scripting
Procedural
Functional
Object-oriented
Object-oriented principles
Inheritance
Classical vs Prototypical
Single, Multiple, or something else
Classes
Static variables/global variables
access modifiers (private, public, protected)
API (or how to do basic stuff)
Basic I/O
Print to Standard Out
Read from Standard in
File I/O
Read a file
Write a file
Check file attributes
Use of regular expressions
Referencing environment variables
Executing system commands
Threading model
Create threads
Thread-safety
Synchronization primitives
Templating
Another important thing not mentioned here yet is just object-oriented programming: the ideas revolving around classes, inheritance, interfaces, etc.
A very important basic programming skill is the ability to think at many different levels of abstraction and to know when and which level of abstraction is the most appropriate for a particular programming task.
Pointers. Because so few people actually understand them.
Recursion and iteration, plus what the difference is, and when you use them.
Get an algorithms book and work through the exercises -- you won't be disappointed.
Testing! (unit testing, integration testing, fixtures, mock objects, ...)
And not a programming skill, but surely a development skill: using revision control, and learning to commit sets of changes that handle one (or a few related) requirement, or bugfix, and will always result in a source tree that compiles without errors. This will teach you to organize your work :-)
And last but not least: English... :-) Again, this is not a programming skill, and I know some may disagree, but I feel that any programming language that uses English keywords, should also be programmed in English. So: use English variable names, and so on. I'd even say that the code comments should be in English, but I am sure even more people would disagree about that... So: learn how others describe their code, and adhere to that.
If I were you, I'd go back and learn the C programming language from the classic K&R book.
Find out what sort of thing you want to program for first - e.g. web, PC applications, Java based applications, mobile devices, reports, system interfaces, business to business interfaces, etc. then go from there.

What do people find so appealing about dynamic languages? [closed]

It seems that everybody is jumping on the dynamic, non-compiled bandwagon lately. I've mostly only worked in compiled, statically typed languages (C, Java, .NET). The experience I have with dynamic languages is stuff like ASP (VBScript), JavaScript, and PHP. Using these technologies has left a bad taste in my mouth when thinking about dynamic languages. Things that usually would have been caught by the compiler, such as misspelled variable names and assigning a value of the wrong type to a variable, don't occur until runtime. And even then, you may not notice an error, as it just creates a new variable and assigns some default value. I've also never seen IntelliSense work well in a dynamic language, since, well, variables don't have any explicit type.
What I want to know is: what do people find so appealing about dynamic languages? What are the main advantages in terms of things that dynamic languages allow you to do that can't be done, or are difficult to do, in compiled languages? It seems to me that we decided a long time ago that things like uncompiled ASP pages throwing runtime exceptions were a bad idea. Why is there a resurgence of this type of code? And why does it seem, to me at least, that Ruby on Rails doesn't really look like anything you couldn't have done with ASP 10 years ago?
I think the reason is that people are used to statically typed languages that have very limited and inexpressive type systems. These are languages like Java, C++, Pascal, etc. Instead of going in the direction of more expressive type systems and better type inference, (as in Haskell, for example, and even SQL to some extent), some people like to just keep all the "type" information in their head (and in their tests) and do away with static typechecking altogether.
What this buys you in the end is unclear. There are many misconceived notions about typechecking, the ones I most commonly come across are these two.
Fallacy: Dynamic languages are less verbose. The misconception is that type information equals type annotation. This is totally untrue. We all know that type annotation is annoying. The machine should be able to figure that stuff out. And in fact, it does in modern compilers. Here is a statically typed QuickSort in two lines of Haskell (from haskell.org):
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
And here is a dynamically typed QuickSort in LISP (from swisspig.net):
(defun quicksort (lis) (if (null lis) nil
(let* ((x (car lis)) (r (cdr lis)) (fn (lambda (a) (< a x))))
(append (quicksort (remove-if-not fn r)) (list x)
(quicksort (remove-if fn r))))))
The Haskell example falsifies the hypothesis statically typed, therefore verbose. The LISP example falsifies the hypothesis verbose, therefore statically typed. There is no implication in either direction between typing and verbosity. You can safely put that out of your mind.
Fallacy: Statically typed languages have to be compiled, not interpreted. Again, not true. Many statically typed languages have interpreters. There's the Scala interpreter, The GHCi and Hugs interpreters for Haskell, and of course SQL has been both statically typed and interpreted for longer than I've been alive.
You know, maybe the dynamic crowd just wants freedom to not have to think as carefully about what they're doing. The software might not be correct or robust, but maybe it doesn't have to be.
Personally, I think that those who would give up type safety to purchase a little temporary liberty, deserve neither liberty nor type safety.
Don't forget that you need to write 10x code coverage in unit tests to replace what your compiler does :D
I've been there, done that with dynamic languages, and I see absolutely no advantage.
When reading other people's responses, it seems that there are more or less three arguments for dynamic languages:
1) The code is less verbose.
I don't find this valid. Some dynamic languages are less verbose than some static ones, but F# is statically typed and its static typing does not add much, if any, code. It is implicitly typed, though, but that is a different thing.
2) "My favorite dynamic language X has my favorite functional feature Y, so therefore dynamic is better". Don't mix up functional and dynamic (I can't understand why this has to be said).
3) In dynamic languages you can see your results immediately. News: you can do that with C# in Visual Studio (since 2005) too. Just set a breakpoint, run the program in the debugger and modify the program while debugging. I do this all the time and it works perfectly.
Myself, I'm a strong advocate for static typing, for one primary reason: maintainability. I have a system with a couple 10k lines of JavaScript in it, and any refactoring I want to do will take like half a day since the (non-existent) compiler will not tell me what that variable renaming messed up. And that's code I wrote myself, IMO well structured, too. I wouldn't want the task of being put in charge of an equivalent dynamic system that someone else wrote.
I guess I will be massively downvoted for this, but I'll take the chance.
VBScript sucks, unless you're comparing it to another flavor of VB.
PHP is ok, so long as you keep in mind that it's an overgrown templating language.
Modern Javascript is great. Really. Tons of fun. Just stay away from any scripts tagged "DHTML".
I've never used a language that didn't allow runtime errors. IMHO, that's largely a red-herring: compilers don't catch all typos, nor do they validate intent. Explicit typing is great when you need explicit types, but most of the time, you don't. Search for the questions here on generics or the one about whether or not using unsigned types was a good choice for index variables - much of the time, this stuff just gets in the way, and gives folks knobs to twiddle when they have time on their hands.
But, i haven't really answered your question. Why are dynamic languages appealing? Because after a while, writing code gets dull and you just want to implement the algorithm. You've already sat and worked it all out in pen, diagrammed potential problem scenarios and proved them solvable, and the only thing left to do is code up the twenty lines of implementation... and two hundred lines of boilerplate to make it compile. Then you realize that the type system you work with doesn't reflect what you're actually doing, but someone else's ultra-abstract idea of what you might be doing, and you've long ago abandoned programming for a life of knicknack tweaking so obsessive-compulsive that it would shame even fictional detective Adrian Monk.
That's when you go get plastered and start looking seriously at dynamic languages.
I am a full-time .Net programmer fully entrenched in the throes of statically-typed C#. However, I love modern JavaScript.
Generally speaking, I think dynamic languages allow you to express your intent more succinctly than statically typed languages as you spend less time and space defining what the building blocks are of what you are trying to express when in many cases they are self evident.
I think there are multiple classes of dynamic languages, too. I have no desire to go back to writing classic ASP pages in VBScript. To be useful, I think a dynamic language needs to support some sort of collection, list or associative construct at its core so that objects (or what pass for objects) can be expressed and allow you to build more complex constructs. (Maybe we should all just code in LISP ... it's a joke ...)
I think in .NET circles, dynamic languages get a bad rap because they are associated with VBScript and/or JavaScript. VBScript is just recalled as a nightmare for many of the reasons Kibbee stated -- anybody remember enforcing types in VBScript using CLng to make sure you got enough bits for a 32-bit integer? Also, I think JavaScript is still viewed as the browser language for drop-down menus that is written a different way for each browser. In that case, the issue is not the language, but the various browser object models. What's interesting is that the more C# matures, the more dynamic it starts to look. I love lambda expressions, anonymous objects and type inference. It feels more like JavaScript every day.
Here is a statically typed QuickSort in two lines of Haskell (from haskell.org):
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
And here is a dynamically typed QuickSort in LISP (from swisspig.net):
(defun quicksort (lis) (if (null lis) nil
(let* ((x (car lis)) (r (cdr lis)) (fn (lambda (a) (< a x))))
(append (quicksort (remove-if-not fn r)) (list x)
(quicksort (remove-if fn r))))))
I think you're biasing things with your choice of language here. Lisp is notoriously paren-heavy. A closer equivalent to Haskell would be Python.
def qsort(L):
    if len(L) <= 1: return L
    return qsort([lt for lt in L[1:] if lt < L[0]]) + [L[0]] + qsort([ge for ge in L[1:] if ge >= L[0]])
Python code from here
For me, the advantage of dynamic languages is how much more readable the code becomes due to less code and functional techniques like Ruby's block and Python's list comprehension.
But then I kind of miss the compile-time checking (typos do happen) and IDE auto-complete. Overall, the smaller amount of code and the readability pay off for me.
Another advantage is the usually interpreted/non compiled nature of the language. Change some code and see the result immediately. It's really a time saver during development.
Last but not least, I like the fact that you can fire up a console and try out something you're not sure of, like a class or method that you've never used before and see how it behaves. There are many uses for the console and I'll just leave that for you to figure out.
Your arguments against dynamic languages are perfectly valid. However, consider the following:
Dynamic languages don't need to be compiled: just run them. You can even reload the files at run time without restarting the application in most cases.
Dynamic languages are generally less verbose and more readable: have you ever looked at a given algorithm or program implemented in a static language, then compared it to the Ruby or Python equivalent? In general, you're looking at a reduction in lines of code by a factor of 3. A lot of scaffolding code is unnecessary in dynamic languages, and that means the end result is more readable and more focused on the actual problem at hand.
Don't worry about typing issues: the general approach when programming in dynamic languages is not to worry about typing: most of the time, the right kind of argument will be passed to your methods. And once in a while, someone may use a different kind of argument that just happens to work as well. When things go wrong, your program may be stopped, but this rarely happens if you've done a few tests.
I too found it a bit scary to step away from the safe world of static typing at first, but for me the advantages by far outweigh the disadvantages, and I've never looked back.
I believe that the "new found love" for dynamically typed languages has less to do with whether statically typed languages are better or worse - in the absolute sense - than with the rise in popularity of certain dynamic languages. Ruby on Rails was obviously a big phenomenon that caused the resurgence of dynamic languages. The thing that made Rails so popular, and created so many converts from the static camp, was mainly very terse and DRY code and configuration. This is especially true when compared to Java web frameworks, which required mountains of XML configuration. Many Java programmers - smart ones too - converted over, and some even evangelized Ruby and other dynamic languages. For me, three distinct features allow dynamic languages like Ruby or Python to be more terse:
Minimalist syntax - the big one is that type annotations are not required, but also the language designer designed the language from the start to be terse
Inline function syntax (or the lambda) - the ability to write inline functions and pass them around as variables makes many kinds of code more brief. In particular this is true for list/array operations. The roots of this idea were obviously in LISP.
Metaprogramming - metaprogramming is a big part of what makes Rails tick. It gave rise to a new way of refactoring code that allowed the client code of your library to be much more succinct. This also originates from LISP.
All three of these features are not exclusive to dynamic languages, but they certainly are not present in the popular static languages of today: Java and C#. You might argue C# has #2 in delegates, but I would argue that it's not widely used at all - such as with list operations.
As for more advanced static languages... Haskell is a wonderful language; it has #1 and #2, and although it doesn't have #3, its type system is so flexible that you will probably not find the lack of metaprogramming limiting. I believe you can do metaprogramming in OCaml at compile time with a language extension. Scala is a very recent addition and is very promising. F# for the .NET camp. But users of these languages are in the minority, and so they didn't really contribute to this change in the programming-language landscape. In fact, I very much believe the popularity of Ruby affected the popularity of languages like Haskell, OCaml, Scala, and F# in a positive way, in addition to the other dynamic languages.
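For what it's worth, Kotlin (the subject of the first question on this page) is another statically typed language that has #1 and #2; a minimal sketch:
fun main() {
    // no type annotations required (#1), and the lambda is passed inline (#2)
    val longWords = listOf("ruby", "python", "kotlin", "go")
        .filter { it.length > 4 }
        .map { it.uppercase() }
    println(longWords)    // [PYTHON, KOTLIN]
}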
Personally, I think it's just that most of the "dynamic" languages you have used just happen to be poor examples of languages in general.
I am way more productive in Python than in C or Java, and not just because you have to do the edit-compile-link-run dance. I'm getting more productive in Objective-C, but that's probably more due to the framework.
Needless to say, I am more productive in any of these languages than PHP. Hell, I'd rather code in Scheme or Prolog than PHP. (But lately I've actually been doing more Prolog than anything else, so take that with a grain of salt!)
My appreciation for dynamic languages is very much tied to how functional they are. Python's list comprehensions, Ruby's closures, and JavaScript's prototyped objects are all very appealing facets of those languages. All also feature first-class functions--something I can't see living without ever again.
I wouldn't categorize PHP and VB (script) in the same way. To me, those are mostly imperative languages with all of the dynamic-typing drawbacks that you suggest.
Sure, you don't get the same level of compile-time checks (since there ain't a compile time), but I would expect static syntax-checking tools to evolve over time to at least partially address that issue.
One of the advantages pointed out for dynamic languages is to just be able to change the code and continue running. No need to recompile. In VS.Net 2008, when debugging, you can actually change the code, and continue running, without a recompile. With advances in compilers and IDEs, is it possible that this and other advantages of using dynamic languages will go away.
Ah, I didn't see this topic when I posted a similar question.
Aside from the good features the rest of the folks mentioned here about dynamic languages, I think everybody forgot one, the most basic thing: metaprogramming.
Programming the program.
It's pretty hard to do in compiled languages generally; take .NET, for example. To make it work you have to do all kinds of mumbo jumbo, and it usually ends with code that runs around 100 times slower.
Most dynamic languages have a way to do metaprogramming, and that is something that keeps me there - the ability to create any kind of code in memory and perfectly integrate it into my application.
For instance, to create a calculator in Lua, all I have to do is:
print( loadstring( "return " .. io.read() )() )
Now, try to do that in .Net.
My main reason for liking dynamic (typed, since that seems to be the focus of the thread) languages is that the ones I've used (in a work environment) are far superior to the non-dynamic languages I've used. C, C++, Java, etc... they're all horrible languages for getting actual work done in. I'd love to see an implicitly typed language that's as natural to program in as many of the dynamically typed ones.
That being said, there are certain constructs that are just amazing in dynamically typed languages. For example, in Tcl:
lindex $mylist end-2
The fact that you pass in "end-2" to indicate the index you want is incredibly concise and obvious to the reader. I have yet to see a statically typed language that accomplishes such.
I think this kind of argument is a bit stupid: "Things that usually would have been caught by the compiler such as misspelled variable names and assigning a value of the wrong type to a variable don't occur until runtime". Yes, that's right: as a PHP developer I don't see things like mistyped variables until runtime, but runtime is step 2 for me; in C++ (which is the only compiled language I have any experience with) it is step 3, after compiling and linking.
Not to mention that it takes only a few seconds after I hit save for my code to be ready to run, unlike in compiled languages where it can take literally hours. I'm sorry if this sounds a bit angry, but I'm kind of tired of people treating me as a second-rate programmer because I don't have to compile my code.
The argument is more complex than this (read Yegge's article "Is Weak Typing Strong Enough" for an interesting overview).
Dynamic languages don't necessarily lack error checking either - C#'s type inference is possibly one example. In the same way, C and C++ have terrible compile checks and they are statically typed.
The main advantages of dynamic languages are a) capability (which doesn't necessarily have to be used all the time) and b) Boyd's Law of Iteration.
The latter reason is massive.
Although I'm not a big fan of Ruby yet, I find dynamic languages to be really wonderful and powerful tools.
The idea that there is no type checking and variable declaration is not too big an issue really. Admittedly, you can't catch these errors until run time, but for experienced developers this is not really an issue, and when you do make mistakes, they're usually easily fixed.
It also forces novices to read what they're writing more carefully. I know learning PHP taught me to be more attentive to what I was actually typing, which has improved my programming even in compiled languages.
Good IDEs will give enough intellisense for you to know whether a variable has been "declared" and they also try to do some type inference for you so that you can tell what a variable is.
The power of what can be done with dynamic languages is really what makes them so much fun to work with in my opinion. Sure, you could do the same things in a compiled language, but it would take more code. Languages like Python and PHP let you develop in less time and get a functional codebase faster most of the time.
And for the record, I'm a full-time .NET developer, and I love compiled languages. I only use dynamic languages in my free time to learn more about them and better myself as a developer..
I think we need different types of languages depending on what we are trying to achieve or solve with them. If we want an application that creates, retrieves, updates and deletes records from the database over the internet, we are better off doing it with one line of RoR code (using the scaffold) than writing it from scratch in a statically typed language. Using dynamic languages frees up the mind from wondering about
which variable has which type
how to grow a string dynamically as the need arises
how to write code so that if I change the type of one variable, I don't have to rewrite all the functions that interact with it
to problems that are closer to the business needs, like
the data is saving/updating in the database; how do I use it to drive traffic to my site?
Anyway, one advantage of loosely typed languages is that we don't really care what type something is, as long as it behaves like what it is supposed to. That is the reason we have duck typing in dynamically typed languages. It is a great feature and I can use the same variable names to store different types of data as the need arises. Also, statically typed languages force you to think like a machine (how does the compiler interact with your code, etc.), whereas dynamically typed languages, especially Ruby/RoR, force the machine to think like a human.
These are some of the arguments I use to justify my job and experience in dynamic languages!
I think both styles have their strengths. This either/or thinking is kind of crippling to our community in my opinion. I've worked in architectures that were statically-typed from top to bottom and it was fine. My favorite architecture is for dynamically-typed at the UI level and statically-typed at the functional level. This also encourages a language barrier that enforces the separation of UI and function.
To be a cynic, it may be simply that dynamic languages allow the developer to be lazier and to get things done knowing less about the fundamentals of computing. Whether this is a good or bad thing is up to the reader :)
FWIW, compiling most applications shouldn't take hours. I have worked with applications that are between 200-500k lines that take minutes to compile. Certainly not hours.
I prefer compiled languages myself. I feel as though the debugging tools (in my experience, which might not be true for everything) are better and the IDE tools are better.
I like being able to attach my Visual Studio to a running process. Can other IDEs do that? Maybe, but I don't know about them. I have been doing some PHP development work lately and to be honest it isn't all that bad. However, I much prefer C# and the VS IDE. I feel like I work faster and debug problems faster.
So maybe it is more a toolset thing for me than the dynamic/static language issue?
One last comment... if you are developing with a local server saving is faster than compiling, but often times I don't have access to everything on my local machine. Databases and fileshares live elsewhere. It is easier to FTP to the web server and then run my PHP code only to find the error and have to fix and re-ftp.
Productivity in a certain context. But that is just one environment I know, compared to some others I know or have seen used.
Smalltalk on Squeak/Pharo with Seaside is a much more effective and efficient web platform than ASP.Net(/MVC), RoR or Wicket, for complex applications. Until you need to interface with something that has libraries in one of those but not smalltalk.
Misspelled variable names are red in the IDE, IntelliSense works but is not as specific. Run-time errors on webpages are not an issue but a feature, one click to bring up the debugger, one click to my IDE, fix the bug in the debugger, save, continue. For simple bugs, the round-trip time for this cycle is less than 20 seconds.
Dynamic Languages Strike Back
http://www.youtube.com/watch?v=tz-Bb-D6teE
A talk discussing Dynamic Languages, what some of the positives are, and how many of the negatives aren't really true.
Because I consider it stupid to have to declare the type of the box.
The type stays with the entity, not with the container. Static typing made sense when the type of the box had a direct consequence on how the bits in memory were interpreted.
If you take a look at the design patterns in the GoF, you will realize that a good part of them are there just to fight with the static nature of the language, and they have no reason whatsoever to exist in a dynamic language.
Also, I'm tired of having to write stuff like MyFancyObjectInterface f = new MyFancyObject(). DRY principle anyone ?
Put yourself in the place of a brand new programmer selecting a language to start out with, who doesn't care about dynamic versus static versus lambdas versus this versus that, etc.; which language would YOU choose?
C#
using System;
class MyProgram
{
public static void Main(string[] args)
{
foreach (string s in args)
{
Console.WriteLine(s);
}
}
}
Lua:
function printStuff(args)
for key,value in pairs(args) do
print(value .. " ")
end
end
strings = {
"hello",
"world",
"from lua"
}
printStuff(strings)
This all comes down to partially what's appropriate for the particular goals and what's a common personal preference. (E.G. Is this going to be a huge code base maintained by more people than can conduct a reasonable meeting together? You want type checking.)
The personal part is about trading off some checks and other steps for development and testing speed (while likely giving up some cpu performance). There's some people for which this is liberating and a performance boost, and there's some for which this is quite the opposite, and yes it does sort of depend on the particular flavor of your language too. I mean no one here is saying Java rocks for speedy, terse development, or that PHP is a solid language where you'll rarely make a hard to spot typo.
I have love for both static and dynamic languages. Every project that I've been involved in since about 2002 has been a C/C++ application with an embedded Python interpreter. This gives me the best of both worlds:
The components and frameworks that make up the application are, for a given release of an application, immutable. They must also be very stable, and hence well tested. A statically typed language is the right choice for building these parts.
The wiring up of components, loading of component DLLs, artwork, most of the GUI, etc... can vary greatly (say, to customise the application for a client) with no need to change any framework or components code. A dynamic language is perfect for this.
I find that the mix of a statically typed language to build the system and a dynamically typed language to configure it gives me flexibility, stability and productivity.
To answer the question of "What's with the love of dynamic languages?" For me it's the ability to completely re-wire a system at runtime in any way imaginable. I see the scripting language as "running the show", therefore the executing application may do anything you desire.
I don't have much experience with dynamic languages in general, but the one dynamic language I do know, JavaScript(aka ECMAScript), I absolutely love.
Well, wait, what's the discussion here? Dynamic compilation? Or dynamic typing? JavaScript covers both bases so I guess I'll talk about both:
Dynamic compilation:
To begin, dynamic languages are compiled; the compilation is simply put off until later. And Java and .NET really are compiled twice: once to their respective intermediate languages, and again, dynamically, to machine code.
But when compilation is put off you can see results faster. That's one advantage. I do enjoy simply saving the file and seeing my program in action fairly quick.
Another advantage is that you can write and compile code at runtime. Whether this is possible in statically compiled code, I don't know. I imagine it must be, since whatever compiles JavaScript is ultimately machine code and statically compiled. But in a dynamic language this is a trivial thing to do. Code can write and run itself. (And I'm pretty sure .NET can do this, but the CIL that .NET compiles to is dynamically compiled on the fly anyways, and it's not so trivial in C#)
Dynamic typing:
I think dynamic typing is more expressive than static typing. Note that I'm using the term expressive informally to say that dynamic typing can say more with less. Here's some JavaScript code:
var Person = {};
Do you know what Person is now? It's a generic dictionary. I can do this:
Person["First_Name"] = "John";
Person["Last_Name"] = "Smith";
But it's also an object. I could refer to any of those "keys" like this:
Person.First_Name
And add any methods I deem necessary:
Person.changeFirstName = function(newName) {
this.First_Name = newName;
};
Sure, there might be problems if newName isn't a string. It won't be caught right away, if ever, but you can check yourself. It's a matter of trading expressive power and flexibility for safety. I don't mind adding code to check types, etc, myself, and I've yet to run into a type bug that gave me much grief (and I know that isn't saying much. It could be a matter of time :) ). I very much enjoy, however, that ability to adapt on the fly.
Nice blog post on the same topic: Python Makes Me Nervous
Method signatures are virtually useless in Python. In Java, static typing makes the method signature into a recipe: it's all the shit you need to make this method work. Not so in Python. Here, a method signature will only tell you one thing: how many arguments you need to make it work. Sometimes, it won't even do that, if you start fucking around with **kwargs.
Because it's fun fun fun. It's fun to not worry about memory allocation, for one. It's fun not waiting for compilation. etc etc etc
Weakly typed languages allow flexibility in how you manage your data.
I used VHDL last spring for several classes, and I like its method of representing bits/bytes, and how the compiler catches errors if you try to assign a 6-bit bus to a 9-bit bus. I tried to recreate it in C++, and I'm having a fair struggle to get the typing to work smoothly with existing types. Steve Yegge does a very nice job of describing the issues involved with strong type systems, I think.
Regarding verbosity: I find Java and C# to be quite verbose in the large (let's not cherry-pick small algorithms to "prove" a point). And yes, I've written in both. C++ struggles in the same area as well; VHDL succumbs here.
Parsimony appears to be a virtue of dynamic languages in general (I present Perl and F# as examples).
