Tool to analyse programming languages used in a project - statistics

The only tool I can find so far is https://github.com/github/linguist
I'm sure this tool works fine in some situations, but I get a lot of false positives and it misses some languages. On a project consisting of some C, C++, and Objective-C, it claims to see C, C++, D, Fortran, Shell, and Python. If I analyze the Objective-C directory by itself it works fine, but then it misses all the other languages. Very flaky.
So does anyone have a good tool to generate some basic stats on which languages are being used in a very large project?
UPDATE:
Tried https://github.com/blackducksw/ohcount/
Similar story to linguist, but it provided far more detail. Still a bit hit and miss at times, though.

Give SLOCCount a try. It can identify numerous programming languages and produces some interesting statistics (such as estimated development cost).
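If the dedicated tools keep misclassifying things and all you need is a rough breakdown, you can approximate the statistics yourself by bucketing files by extension and counting lines. A minimal do-it-yourself sketch in Python (the extension-to-language map is illustrative only, and this is far cruder than what linguist, ohcount, or SLOCCount do, since they also inspect file contents):

    import os
    from collections import Counter

    # Illustrative mapping only; ambiguous extensions such as .m
    # (Objective-C vs. MATLAB) are exactly where the real tools also struggle.
    EXT_TO_LANG = {
        ".c": "C", ".h": "C/C++ header", ".cpp": "C++", ".cc": "C++",
        ".m": "Objective-C", ".py": "Python", ".sh": "Shell", ".java": "Java",
    }

    def language_stats(root):
        counts = Counter()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                lang = EXT_TO_LANG.get(os.path.splitext(name)[1].lower())
                if lang is None:
                    continue
                try:
                    with open(os.path.join(dirpath, name), errors="ignore") as f:
                        counts[lang] += sum(1 for _ in f)  # rough SLOC-like count
                except OSError:
                    pass
        return counts

    if __name__ == "__main__":
        for lang, lines in language_stats(".").most_common():
            print(lang, lines)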

Choose platform-independent language for math-intensive computations

I am planning on making a flame fractal engine (no GUI), similar to flam3,
but I need to choose a language. I have one such engine written in Java already, but it is too slow.
What I am after is a compiled language that can be compiled for both Windows and Linux (hence no .NET or GPU languages), preferably with garbage collection and object orientation (so no C or C++).
What languages out there, except D, satisfy these conditions?
Any other thoughts on this?
EDIT: I am more after what similar alternatives there are to D, since D is still in development.
I will probably choose D, since it has native support for complex numbers, among other things.
C++ would be more accessible to other developers, but it is a hard language, in my opinion.
This will undoubtedly start a new flamewar until someone closes the question. My personal opinion is you should use C++, where the missing garbage collection is something you should accept, since manual memory management offers you more performance tuning options.
Google announced a research paper yesterday comparing Java, C++, Go, and Scala; you may find it helpful:
https://days2011.scala-lang.org/sites/days2011/files/ws3-1-Hundt.pdf
You made yourself a very tough task (and asked an interesting question, by the way).
The only language that springs to my mind is Ada (it has an optional garbage collector, as discussed in this thread, and according to this book it supports object orientation). This Ubuntu comparison shows that Ada 2005 is quite fast, generally faster in benchmarks than Java but slower than C/C++.
Disclaimer: I do not claim that Ada is superior to any other language. In fact, I have not used it in any reasonable application. I believe using C++ will produce faster code; moreover, the effort required to manage memory manually in C++ is probably worth the speed improvement, but I am not an expert in this. This is not meant to start a flamewar (as @Doc pointed out, it may happen); it is just my opinion on the topic.
I decided to go for D, since it is closest to what I want.
I was merely curious which other languages were comparable to D.

How to go about making your own programming language? [duplicate]

Possible Duplicate:
Learning to write a compiler
I looked around trying to find out more about programming language development, but couldn't find a whole lot online. I have found some tutorial videos, but not much for text guides, FAQs, advice etc. I am really curious about how to build my own programming language. It brings me to SO to ask:
How can you go about making your own programming language?
I would like to build a very basic language. I don't expect it to be a very good language, nor do I think it will be used by anyone. I simply want to make my own language to learn more about operating systems and programming, and to get better at everything.
Where does one start? Building the syntax? Building a compiler? What skills are needed? A lot of assembly and understanding of the operating system? What languages are most compilers and languages built in? I assume C.
I'd say that before you begin you might want to take a look at the Dragon Book and/or Programming Language Pragmatics. That will ground you in the theory of programming languages. The books cover compilation and interpretation, and will enable you to build all the tools needed to make a basic programming language.
I don't know how much assembly language you know, but unless you're rather comfortable with some dialect of assembly language programming I'd advise you against trying to write a compiler that compiles down to assembly code, as it's quite a challenge. You mentioned earlier that you're familiar with both C and C++, so perhaps you can write a compiler that compiles down to C or C++ and then use gcc/g++ or any other C/C++ compiler to convert the code to a native executable. This is what the Vala programming language does (it converts Vala syntax to C code that uses the GObject library).
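To make the compile-down-to-C idea concrete, here is a deliberately tiny sketch in Python of that workflow: translate your source (here just one arithmetic expression, with no real parsing) into a C program, then let gcc do the rest. The names and the template are made up for illustration:

    # Toy "compiler": splice one arithmetic expression into a C template.
    # A real compiler would parse the expression and generate code from an AST.
    def compile_to_c(expr):
        return (
            "#include <stdio.h>\n"
            "int main(void) {\n"
            "    printf(\"%d\\n\", " + expr + ");\n"
            "    return 0;\n"
            "}\n"
        )

    with open("out.c", "w") as f:
        f.write(compile_to_c("(2 + 3) * 7"))
    # Then compile and run the generated code, e.g.: gcc out.c -o out && ./out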
As for what you can use to write the compiler, you have a lot of options. You could write it by hand in C or C++, or, to simplify development, you could use a higher-level language so that you can focus on writing the compiler rather than on the memory allocation and such that working with strings in C requires.
You could simply write the grammar and have Flex and Bison generate the lexical analyser and parser for you. This is really useful as it allows you to develop iteratively and get to a working compiler quickly.
Another option is to use ANTLR to generate your parser; the advantage there is that ANTLR can generate parsers in many target languages. I've never used it, but I've heard a lot about it.
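If you want a feel for what Flex/Bison or ANTLR are generating for you, a hand-written lexer plus recursive-descent parser for a toy grammar fits on one page. The sketch below (in Python, evaluating as it parses rather than building an AST) handles just expr -> term (('+'|'-') term)*, with numbers as terms; everything in it is illustrative:

    import re

    TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

    def tokenize(text):
        # Yields ("NUMBER", value) and ("OP", char) tokens, then ("EOF", None).
        for number, other in TOKEN_RE.findall(text):
            if number:
                yield ("NUMBER", int(number))
            elif other.strip():
                yield ("OP", other)
        yield ("EOF", None)

    class Parser:
        # Recursive-descent parser for: expr -> term (("+" | "-") term)*
        def __init__(self, text):
            self.tokens = tokenize(text)
            self.current = next(self.tokens)

        def advance(self):
            self.current = next(self.tokens)

        def parse_expr(self):
            value = self.parse_term()
            while self.current in (("OP", "+"), ("OP", "-")):
                op = self.current[1]
                self.advance()
                rhs = self.parse_term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def parse_term(self):
            kind, value = self.current
            if kind != "NUMBER":
                raise SyntaxError("expected a number, got %r" % (value,))
            self.advance()
            return value

    print(Parser("1 + 2 - 3").parse_expr())  # prints 0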
Furthermore if you'd like a better grounding on the models that are used so frequently in programming language compiler/scanner/parser construction you should get a book on the Models of Computation. I'd recommend Introduction to the Theory of Computation.
You also seem to show an interest in gaining an understanding of operating systems. That, I would say, is separate from programming language design and should be pursued separately. The book Principles of Modern Operating Systems is a pretty good starting place for learning about that. You could start with small projects like creating a shell, or writing a program that emulates the ls command, and then move on to more low-level things, depending on how thorough you are with the system calls in C.
I hope that helps you.
EDIT: I've learnt a lot since I wrote this answer. I was taking the online course on programming languages that Brown University was offering when I saw this answer featured there. The professor very rightly points out that this answer talks a lot about parsers but is light on just about everything else. I'd really suggest going through the course videos and exercises if you'd like to get a better idea of how to create a programming language.
It entirely depends on what your programming language is going to be like.
Do you definitely want it to be compiled? There are interpreted languages as well... or you could implement compilation at execution time.
What do you want the target platform to be? Some options:
Native code (which architectures and operating systems?)
JVM
Regular .NET
.NET using the Dynamic Language Runtime (like IronRuby/IronPython)
Parrot
Personally I would strongly consider targeting the JVM or .NET, just because then you get a lot of "safety" for free, as well as a huge set of libraries your language can use. (Obviously with native code there are plenty of libraries too, but I suspect that getting the interoperability between them right may be trickier.)
I see no reason why you'd particularly want to write a compiler (or other part of the system) in C, especially if it's only for educational purposes (so you don't need a 100-million-lines-a-second compiler). What language are you personally most productive in?
Take a look at ANTLR. It is an awesome compiler-compiler, the kind of tool you use to build a parser for a language.
Building a language is basically about defining a grammar and adding production rules to this grammar. Doing that by hand is not trivial, but a good compiler-compiler will help you a lot.
You might also want to have a look at the classic "Dragon Book" (a book about compilers whose cover features a knight slaying a dragon; Google it).
Building domain-specific languages is a useful skill to master. A domain-specific language is typically not a full-featured programming language, but rather something like business rules formulated in a custom language tailor-made for the project. Have a look at that topic too.
There are various tutorials online, such as Write Yourself a Scheme in 48 Hours.
One place to start, though, might be with an "embedded domain-specific language" (EDSL). This is a language that actually runs within the environment of another, but for which you have created keywords, operators, etc. particularly suited to the subject (domain) you want to work in.
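To make the EDSL idea concrete, here is a micro example in Python: operator overloading lets domain expressions read naturally while actually building a small expression tree you can evaluate (or print, or compile) later. All the names are invented for the illustration:

    # A tiny embedded DSL for symbolic arithmetic: x + 2 * y builds an
    # expression tree instead of computing a number immediately.
    class Expr:
        def __add__(self, other):
            return Op("+", self, wrap(other))
        def __mul__(self, other):
            return Op("*", self, wrap(other))
        __radd__ = __add__
        __rmul__ = __mul__

    class Var(Expr):
        def __init__(self, name):
            self.name = name
        def eval(self, env):
            return env[self.name]

    class Const(Expr):
        def __init__(self, value):
            self.value = value
        def eval(self, env):
            return self.value

    class Op(Expr):
        def __init__(self, symbol, left, right):
            self.symbol, self.left, self.right = symbol, left, right
        def eval(self, env):
            a, b = self.left.eval(env), self.right.eval(env)
            return a + b if self.symbol == "+" else a * b

    def wrap(value):
        return value if isinstance(value, Expr) else Const(value)

    x, y = Var("x"), Var("y")
    formula = x + 2 * y                      # reads like math, builds an Expr tree
    print(formula.eval({"x": 1, "y": 3}))    # 7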

Why do you use less expressive languages, and should I also?

I'm a Python programmer who knows a bit of Ruby and PHP as well. I don't really know enough about Java to do anything meaningful, and I certainly don't know C, C++, or other low-level languages. I've heard all the "Who cares about speed because hardware is cheap, but coders are expensive" arguments, and I'm not trying to raise a debate here. I want to understand 2 things about the community of lower level programming languages (whether it's C or even assembly):
What is the main reason you choose to still use it (job requirements, speed, desktop vs web, etc)?
Is it still worth my time (monetarily speaking) to learn C++ or other such languages this late in the game, and would I benefit?
Also, consider the benefits / disadvantages of dynamic vs static typing, when choosing your reasons. I primarily program for the web, but don't take that fully into consideration because it's partly due to the fact that the web is all I know.
Unashamed Fortran programmer here. Speed speed speed. Oh, and all the scientists I work with are reasonably fluent in Fortran. But then I work in computational electromagnetics on large clusters and supercomputers and wouldn't recognise a web application if it leapt up and bit me on the nose.
I wouldn't regard Fortran as low-level or less expressive; I think of it as a DSL for linear algebra and more general number-crunching. So why did I take the bait and respond to this question?
Dynamic vs static typing ? Static please, everything tied down at compile time so the compiler can work its socks off optimising.
For a web programmer, C/C++ will offer you virtually no advantage. It is less expressive than Perl, Ruby, Python, etc. and requires more code and attention to detail of memory management. Unfortunately, choosing a language for its "features" is often second to choosing for its platform. C++ isn't as clean and elegant as C#, most of that comes from its C compatibility. Sadly, even though there are better languages for certain things, most aren't compiled and most aren't widely supported.
If you plan to develop a commercial product that the customer will download or receive on CD, then C/C++ offers you protection of your Intellectual Property (hard to reverse engineer), and a small runtime footprint, as well as ability to target older platforms like Windows XP.
It is not too late in the game to learn C/C++. C/C++ will be around as long as all the higher-level languages are around, because those languages are implemented in C/C++. It is not as if we will all move to Python one day and C/C++ will be retired. High-level, non-compiled languages are not self-hosting, so they cannot exist without C++.
It is the tool to use if you are going to implement higher level things like languages, APIs, toolkits, drivers, IDEs, etc. But C++ is not the tool to use if you want the fastest way to develop an internal GUI app or a Web app.
Just learn the tool for the job. If the job changes, or you wish to change the job, then you may want to push yourself to learn C++ to see the other side of the Computer Science world, the side between the CPU and what you currently write your web apps in.
I'd think the big three reasons here are going to be Performance, Legacy System Support, and Embedded Development.
So mega subjective it renders any answer next to useless.
For the numerical calculations I do writing straight C++ is just faster than using e.g. straight Python. Sure, I can interface my C++ libraries to higher-level languages (and I do), but since most of my work is done on the low-level numerical side, I wouldn't gain too much.
Also consider that a lot of libraries, especially in scientific computation, are in FORTRAN, C, or C++, and linking against them from C++ is much faster (especially if you just want to get it done) than creating wrappers and interfaces all yourself.
If you would gain anything from learning a low-level language depends a lot on your problem domain.
There are several good reasons for lower-level languages.
First, there's a lot of applications where performance does matter. The main application I work on is slow enough as it is (it does a whole lot of things), and would be unusable in Python.
Second, there are applications that require standalone executables that aren't all that big.
Third, there's a tremendous amount of legacy code in C and C++, and it isn't going away any time soon.
Fourth, operating systems are normally written in C or C++ or similar languages, and expose APIs in them. If you need to get chummy with the OS, for whatever reason, you're better off using the OS language.
Dynamic typing is very clearly better for getting an application up and running fast, and my Lisp background pushed me to believe that static typing is normally just premature optimization. However, lots of people believe that static typing is much better for enforcing correctness in large projects, and C and C++ are well suited for large projects.
For your second question, I have no idea what you want to do in the future, so I don't know if it would be worthwhile for you to learn C++. For professional development, I'd strongly suggest learning a variety of languages, including C or a similar language. There are other questions on SO about what languages to learn.
Why aren't Python, Ruby, and PHP written in themselves?
Mission-critical applications need the best possible performance and algorithms, and don't need things like metaprogramming.
C++ has some great ideas and libraries, which I then found in those modern languages, sometimes better sometimes worse (compare power of templates in C++ and generics in Java).
Low level languages will make you learn something more about lower level abstractions of the computer, operating systems or networks.
Static typing restricts you and often makes you write out the types, but it also lets you express your intent in a way that the compiler can check automatically, and you get better tool support. Dynamic typing gives you a free hand; you can do more, and perhaps more easily, but you have to test more thoroughly.
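For a tiny illustration of that trade-off in Python: the optional type hints below can be checked ahead of time by an external tool such as mypy, while plain dynamic execution only discovers the mistake when the call actually runs. The function and values are made up for the example:

    def total_price(quantity: int, unit_price: float) -> float:
        return quantity * unit_price

    # A static checker such as mypy rejects this call before the program runs,
    # reporting an incompatible argument type; run dynamically, it only fails
    # here, at run time, with a TypeError.
    print(total_price("3", 2.5))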
I may be misunderstanding you here, but it sounds like you're under the impression that C++ is the appropriate language for every kind of project. It isn't. You wouldn't use a Liebherr truck for a cross-country road trip. Every popular language in existence got that way because it works well for some situations. There was a time when C++ was used to write web applications, which quickly gave way to Perl scripts because of the trade-off between productivity and performance. So in response to your first question: people mainly still use it because in certain situations it's the best tool for the job.
As far as whether or not you should learn it, I say if you have the time and the desire, go for it. Even if C++ is never the correct tool for any of the projects you take on, a decent understanding of the concepts C++ requires will make you a better developer in every language.

Development time in various languages

Does anybody know of any research or benchmarks of how long it takes to develop the same application in a variety of languages? Really I'm looking for Java vs. C++ but any comparisons would be useful. I have the feeling there is a section in Code Complete about this but my copy is at work.
Edit:
There are a lot of interesting answers to this question but it seems like there is a lack of really good research. I have made a proposal over at meta about this problem.
Pratt & Whitney, purveyors of jet engines for civilian and military applications, did a study on this many years ago, without actually intending to do the study.
They went on the same metrics kick everyone else went on in the 1990s. They collected a bunch of data about their jet engine controller projects, including timecard data, and crunched it. The poor sap who got to crunch the data noticed something in the results: the military projects uniformly had twice the programmer productivity and one-fourth the defect density of the civilian projects.
This, by itself, is significant. It means you only need half as many programmers, and you aren't going to spend quite as much time fixing bugs. What is even more important is that this was an apples-to-apples comparison. A jet engine controller is a jet engine controller.
He then went looking for candidate explanations. All of the usual candidates: individual experience, team size, toolsets, software processes, requirements stability, everything, were trotted out, and they were ruled out when it was seen that the story on those items was uniformly the same on both sides of the aisle. At the end of the day, only one statistically significant difference showed up.
The civilian projects were written in every language you could think of. The military projects were all written in Ada.
IN EVERY SINGLE CASE, against every other comer, for jet engine controllers at Pratt & Whitney, using Ada gave double the productivity and one-fourth the defect density.
I know what the flying code monkeys are going to say. "You can do good work in any language." In theory, that's true. In practice, however, it appears that, at least at Pratt & Whitney, language made a difference.
Last I heard about this, Pratt & Whitney upper management decreed that ALL jet engine controller projects would be done in Ada.
No, I don't have a citation. No paper was ever written. My source on this story was the poor sap who crunched the numbers. Here's a similar study from 1995:
http://archive.adaic.com/intro/ada-vs-c/cada_art.html
This, incidentally, was BEFORE Boeing did the 777, and BEFORE the 777 brake subcontractor story happened. But that's another story.
One of the few funded scientific studies that I'm aware of on cross-language productivity, from the early 90s, funded by ARPA and the ONR:
Haskell vs. Ada vs. C++ vs. Awk vs. ... An Experiment in Software Prototyping Productivity, Hudak & Jones, 1994.
We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the..
This article (a PDF) has some benchmarks (note that it's from 2000) comparing C, C++, Java, Perl, Python, Rexx, and Tcl.
Some common wisdom I believe holds true (also somewhere within the article):
The number of lines written per hour is independent of the language
Opinion: more important is what is faster for a given developer, for example yourself. Whatever you are used to will usually be faster. If you have 20 years of experience with C++ pitfalls and never miss an uninitialized variable, C++ will be faster for you than Java would be.
If you remember all the parameters of CreateWindowEx() by heart, raw Win32 will be faster for you than MFC or WinForms.
A couple of anecdotal data points:
On Project Euler, which invites programming solutions to mathematical problems,
the shortest solutions are almost invariably written in J or K, relatives of APL; there are occasionally MATLAB solutions in the same range. It can be argued, though, that these languages are specialized for math.
The runners-up were Ruby solutions. A lot of algorithm can be wrapped in very little code, and it's much more legible than J or K.
Python and Haskell solutions also did very well, LOC-wise.
The question asked about "fastest development," not "shortest code." But it's conceivable that shorter solutions are faster to come up with - certainly for slow typists!
There's an annual competition among roboticists. Contestants are given some specs for some hardware, a practical problem to solve in software, and limited time to do so. Again very domain specific, of course. Programmers have their choice of tools, including language of course. Every year, the winning team (often a single person) used Forth.
This admittedly limited sample suggests that "development speed" and "effect of language on speed" is often very dependent on the problem domain.
See also
Are there statistical studies that indicates that Python is "more productive"?
for some discussions about this kind of question.
It would make more sense to benchmark the programmers, not the languages. The time to write a program in any mainstream language depends more on the ability of the programmer in that language than on qualities of that specific language.
I think most benchmarks and statements on this topic will mean very little.
Benchmarks can always be gamed; see the history of "Pet Store".
A language that's good at solving one kind of problem might not apply as well to another.
What matters most is the skill of your team, its knowledge of a particular technology, and how well you know the domain you're trying to solve.
UPDATE: Control software for jet engines and helicopters is a very specialized subset of computing problems. It's characterized by very rigorous, complete, detailed specs, and by QA that ensures the multi-million-dollar aircraft cannot crash.
I can second the (very good) citation by John Strohm of Pratt & Whitney control software written in Ada. The control software for Kaman helicopters sold to Australia was also written in Ada.
But this does not lead to the conclusion that if you decided to write your next web site in Ada that you'd have higher productivity and fewer defects than you would if you chose C# or Java or Python or Ruby. All languages are not equally good in all problem domains.
Language/framework comparison for web applications
The Plat_Forms project provides some information of this type for web applications.
There are three studies with different tasks (done in 2007, 2011, and 2012), all following the same format: several teams of three professional developers implemented the same application under controlled conditions within two days.
It covers Java, Perl, PHP, and Ruby and has multiple teams for each language.
The evaluation reports much more than only development time.
Findings of iteration one for instance included
that experience with the language and framework appeared to be more relevant than what that framework was.
that Java tended to induce teams to make laborious constructions while Perl induced them to make pragmatic (and quite handy) constructions.
Findings of iteration two included
that Ruby on Rails was more productive in this type of project (which due to its duration was more rapid prototyping than full-blown development of a mature application)
and that the one exception to the above rule was the one team using Symfony, a PHP framework that has concepts similar to Ruby on Rails (but still with the very different base language underneath).
Look under http://www.plat-forms.org or search the web for "Plat_Forms".
There is plenty more detail in the reports, in particular the thick techreport on iteration 1.
Most programs have to interface with some other framework. It tends to be a good idea to pick the language that has libraries specifically for what you are trying to do. For instance are you trying to build a distributed redundant messaging system? If so I would use Erlang. Are you trying to make a quick and dirty data driven website, use Ruby and Rails. You get the idea. Real time DirectX where performance is key, C++/C/Asm.
If you are writing something that is algorithm based I would look to a functional language like Haskell, although it has a very high learning curve.
This question is a little old fashioned. Focusing on development time solely based on the choice of language is of limited value. There are so many other factors that have equal or more impact than the language itself:
The libraries or frameworks available / used.
The level of quality required (ie. defect count).
The type of application (eg. GUI, server, driver etc...)
The level of maintainability required.
Developer experience in the language.
The platform or OS the application is built on.
As an example, many would say Java is the better choice over C++ to build enterprise (line of business) applications. This is not normally because of the language itself, but instead it is perceived that Java has better (or more mature) web server and database frameworks available to it. This may or may not be true, but that is beside the point.
You may even find that building an application using the same language on different operating systems or platforms gives greatly differing development times. For example, using C++ on Linux to build a GUI application may take longer than building a Windows GUI application in C++, because of the less extensive and less mature GUI libraries available on Linux (once again, this is debatable).
According to Norvig, Lutz Prechelt published just such an article in the October 1999 CACM: "Comparing Java vs. C/C++ Efficiency Issues to Interpersonal Issues".
Norvig includes a link to that article. Unfortunately, the ACM, despite having a bitmap graphic proclaiming their goal of "Advancing Computing as a Science & Profession", couldn't figure out how to maintain stable links on their webpage, so it's just a 404 now. Perhaps your local library could help you out.
That Ada story might be an embellished version of this: http://www.adaic.com/whyada/ada-vs-c/cada_art.html
Erlang vs C++/Corba
"... As the Erlang DCC is less than a quarter of the size of a similar C++/CORBA implementation, the product development in Erlang should be fast, and the code maintainable. We conclude that Erlang and associated libraries are suitable for the rapid development of maintainable and highly reliable distributed products."
Paper here
There's a reason why there are no real comparisons in that aspect, except for anecdotal evidence (which can be found in favor of almost any language).
Actually writing code takes a relatively small portion of a developer's time. Even if a language lets you cut coding time in half, that will be barely noticeable by the time the project ends. Design, program structure, and the development process are all much more important, and then there are libraries, tools, and experience with them.
Some languages are better suited for certain development processes than the others, so if you've settled on design and process you can decide which language will be more efficient - but not before.
(didn't notice there's a similar answer already, so feel free to ignore this)

Why do you or do you not implement using polyglot solutions?

Polyglot, or multiple-language, solutions allow you to apply languages to the problems they are best suited for. Yet, at least in my experience, software shops tend to want to apply one "super" language to all aspects of the problem they are trying to solve, sticking with that language come "hell or high water" even if another language is available that solves the problem simply and naturally. Why do you or do you not implement using polyglot solutions?
I almost always advocate more than one language in a solution space (actually, more than two, since SQL is part of so many projects). Even if the client likes a language with explicit typing and a large pool of talent, I advocate the use of scripting languages for administrative tasks, testing, data scrubbing, etc.
The advantages of many-language boil down to "right tool for the job."
There are legitimate disadvantages, though:
Harder to have collective code ownership (not everyone is versed in all languages)
Integration problems (diminished in managed platforms)
Increased runtime overhead from infrastructure libraries (this is often significant)
Increased tooling costs (IDEs, analysis tools, etc.)
Cognitive "bumps" when switching from one to another. This is a double-edged sword: for those well-versed, different paradigms are complementary and when a problem arises in one there is often a "but in X I would solve this with Z!" and problems are solved rapidly. However, for those who don't quite grok the paradigms, there can be a real slow-down when trying to comprehend "What is this?"
I also think it should be said that if you're going to go with many languages, in my opinion you should go for languages with significantly different approaches. I don't think you gain much in terms of problem-solving by having, say, both C# and VB on a project. I think in addition to your mainstream language, you want to have a scripting language (high productivity for smaller and one-off tasks) and a language with a seriously different cognitive style (Haskell, Prolog, Lisp, etc.).
I've been lucky to work on small projects with the possibility to suggest a suitable language for my task. For example, C as the low-level language with an extended Lua on top for high-level work and prototyping has served very well for getting up to speed quickly on a new embedded platform. I'd always prefer two languages for any bigger project, one of them domain-specific and fit to that particular project. It adds a lot of expressiveness for quickly trying out new features.
However, this probably serves you best with agile development methods; in a more traditional project the first hurdle to overcome would be choosing which language to use, and scripting languages tend to seem like "newcomers" with less marketing push or "seriousness" in their image.
The biggest issue with polyglot solutions is that the more languages involved, the harder it is to find programmers with the proper skill set. Particularly if any of the languages are even slightly esoteric, or hail from entirely different schools of design (e.g. - functional vs procedural vs object oriented). Yes, any good programmer should be able to learn what they need, but management often wants someone who can "hit the ground running", no matter how unrealistic that is.
Other reasons include code reuse, increased complexity interfacing between the different languages, and the inevitable turf wars over which language a particular bit of code should belong in.
All of that said, realize that many systems are polyglot by design -- anything using databases will have SQL in addition to some other language. And there's often scripting involved as well, either for actual code or for the build system.
Pretty much all of my professional programming experience has been in the above category. Generally there's a core language (C or C++), SQL of varying degrees, shell scripting, and possibly some perl or python code on the periphery.
My employer's attitude has always been to use what works.
This has meant that when we found some useful Perl modules (like the one that implements "Benford's Law", Statistics::Benford), I had to learn how to use ActiveState's PDK.
When we decided to add interval maths to our project, I had to learn Ada and how to use both GNAT and ObjectAda.
When a high-speed string library was requested, I had to relearn assembler and get used to MASM32 and WinAsm.
When we wanted to have a COM DLL of libiconv (based on Delphi Inspiration's code), I got reacquainted with Delphi.
When we wanted to use Dr. Bill Poser's libuninum, I had to relearn C, and how to use Visual C++ 6's IDE.
We still prototype things in VB6 and VBScript, because they're good at it.
Maybe sometime down the line I'll end up doing stuff in Forth, or Eiffel, or D, or, heaven help me, Haskell (I don't have anything against the language per se, it's just a very different paradigm.)
One issue that I've run into is that Visual Studio doesn't allow multiple languages to be mixed in a single project, forcing you to abstract things out into separate DLLs for each language, which isn't necessarily ideal.
I suspect the main reason, however, is the perception that switching back and forth between many different languages leads to programmer inefficiency. There is some truth to this: I switch constantly between JavaScript, C#, VBScript, and VB.NET, and there is a bit of lost time as I move from one language to another and mix up my syntax.
Still, there is definitely room for more "polyglot" solutions particularly that extend beyond using JavaScript and whatever back-end programming language.
Well, all the web is polyglot now with Java/PHP/Ruby in the back and JavaScript in the front...
Other examples that come to mind: a flexible complex system written in a low-level language (C or C++) with an embedded high-level language (Python, Lua, Scheme) to provide a customization and scripting interface, e.g. Microsoft Office and VBA, or Blender and Python.
A project which can be done in a scripting language such as Python with performance critical or OS-dependent pieces done in C.
Both the JVM and the CLR are getting lots of interesting new compatible scripting languages: Java + Groovy, C# + IronPython, etc.
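As a small illustration of the "scripting language on top, C underneath" split mentioned just above, this is roughly what the Python side can look like using ctypes from the standard library. The shared library and the sum_array function are hypothetical; you would write and build them yourself from the performance-critical C code:

    import ctypes

    # Hypothetical library built from the C code,
    # e.g.: gcc -shared -fPIC -O2 fast.c -o libfast.so
    lib = ctypes.CDLL("./libfast.so")

    # Assume the C side exports: double sum_array(const double *values, size_t n);
    lib.sum_array.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
    lib.sum_array.restype = ctypes.c_double

    def sum_array(values):
        buf = (ctypes.c_double * len(values))(*values)  # copy into a C array
        return lib.sum_array(buf, len(values))

    print(sum_array([0.5, 1.5, 2.0]))  # the heavy lifting happens in C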

Resources