Maybe this is just a little misunderstanding, but how can a programming language be implemented?
I'm not talking about how to implement my own programming language, but about the word "implemented" itself.
I mean, you can implement a compiler or an interpreter, but a programming language?
What does it mean if I read "C++ is implemented in C" or "Python was implemented in C"?
I think a language is more something like a protocol for how someone thinks things should be implemented. For example, if they want to display a message box, they can say the command for this is ShowMessageBox(string) and implement a compiler that will translate this into something that works on a computer (aside from the selected programming paradigms they imagine).
I think this question leads to the question "what is a programming language in reality"? A compiler, an interpreter, or just a documented language standard about how things should be implemented in a language?
[EDIT]
Answer: Languages are never implemented, only compilers/interpreters etc. It's that simple.
Here's a very academic answer (from a longtime academic).
First I'll reframe the question:
What does it mean for a programming language to be implemented?
I'll start with "what is a programming language":
A programming language is a formal language (a set of utterances we can characterize precisely through algorithmic rules) such that a sentence in the language has a computational meaning. There are a variety of ways to give computation meaning; two of the most popular are that a computation stands for a function (from values to values, or from machine states to machine states) and that a computation stands for a machine that makes "state transitions" and interacts with the outside world.
A language is implemented when a means is provided to read in an utterance and perform the computation, that is, calculate the function or perform the behavior. The means is the implementation.
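To make the definition concrete, here is a minimal sketch in Python of an implementation of a tiny language: postfix integer arithmetic. The function name run and the language itself are invented for illustration; the point is only that an implementation reads an utterance and calculates the function it denotes.

def run(utterance):
    # An implementation: read a sentence of the language and perform
    # the computation it denotes. The "language" is postfix arithmetic
    # over integers, e.g. "3 4 + 2 *".
    stack = []
    for token in utterance.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(int(token))
    return stack.pop()

print(run("3 4 + 2 *"))  # -> 14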
Typical implementations include
Direct interpretation of the language syntax. This model is rare but FORTH probably comes closest to it.
Translation of the syntax into virtual-machine code, also called bytecode, which is itself another language and which is interpreted. It is popular to write bytecode interpreters in C. Lua, Perl, Python, and Ruby are implemented more or less this way (a small illustration follows this list).
Translation of the syntax into hardware machine instructions, which is itself another language, and which is interpreted by your CPU. C and C++ are typically (but not always) implemented this way.
Direct interpretation of the language in hardware. IA-32 machine code and AMD64 machine code are implemented this way.
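As a small illustration of the bytecode model mentioned above: CPython first compiles source to bytecode, which its virtual machine then interprets, and the standard dis module makes this visible. A sketch (the function add is invented for illustration):

import dis

def add(a, b):
    return a + b

# Prints the compiled bytecode, e.g. LOAD_FAST / RETURN_VALUE;
# exact opcode names vary by CPython version.
dis.dis(add)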
When a person says "Language X is implemented in Y", they are usually saying that a translator for X or an interpreter for X's bytecode is written in language Y.
One of the great secrets of compiler writers is the ability to write the compiler for language X in language X itself. If this interests you, get Andrew Appel's paper Axiomatic Bootstrapping: A Guide for Compiler Hackers.
Sometimes the answer to this question is not obvious. Squeak Smalltalk writes both a translator and a bytecode interpreter in Smalltalk, then translates the interpreter to C, which is translated to machine code. What is Squeak implemented in? Smalltalk.
Poke a professor; get a lecture.
You are right, those statements don't make any sense. It's pretty obvious that whoever made those statements doesn't understand the difference between a programming language and a compiler (or interpreter).
This is a surprisingly common problem. For example, sometimes people talk about interpreted languages or compiled languages. That's the same thing: languages aren't interpreted or compiled, they just are. Interpretation and compilation are traits of the implementation not the language.
Another goodie: Python has a GIL. No, it doesn't: one implementation of Python has a GIL, all the other implementations don't, and the Python Language itself certainly doesn't.
Or: Ruby has green threads. Again, not true: Ruby has threads. Period. Whether any particular language implementation chooses to implement them as green threads, native threads, platform threads or whatever, is a trait of that particular implementation, not of Ruby.
And of course my favorite: Ruby 1.9 is faster than Ruby 1.8. This doesn't even make sense: Ruby 1.9 and Ruby 1.8 are programming languages, i.e. a bunch of abstract mathematical rules. You cannot run a programming language, therefore a programming language can never be "faster" or "slower" than another one.
The most blatant confusion about the difference between programming languages and implementations is the Computer Language Benchmark Game, which claims to benchmark languages against each other but in fact benchmarks implementations.
All of these are just different expressions of the fact that apparently some people seem to be fundamentally incapable of grasping the concept of abstraction. Or at least the concept of having an abstract language and a concrete implementation of that language.
If we go back to the statement that "Python is implemented in C", it should now be obvious that that statement is not just wrong. If the statement were wrong that would imply that the statement even makes sense, i.e. that there is some possible world out there, in which it could at least theoretically be right. But that's not the case. The statement is neither wrong nor right, it simply doesn't make sense. If English were a typed language, it would be a type error.
Python is a programming language. Programming languages aren't implemented in anything. They are just implemented. Compilers and interpreters are implemented in languages. But even if you interpret the statement this way, it isn't true: Jython is implemented in Java, IronPython is implemented in C#, PyPy is implemented in RPython and Python, Pynie is implemented in PGE, NQP and PIR. (Oh, and all of those implementations have compilers, so there goes your "Python is an interpreted language".)
It's similar with Ruby: Rubinius is implemented in Ruby and C++, JRuby and XRuby are implemented in Java, IronRuby and Ruby.NET are implemented in C#, HotRuby is implemented in ECMAScript, Red Sun is implemented in ActionScript, RubyGoLightly is implemented in Go, Cardinal is implemented in PGE, NQP and PIR, SmallRuby is implemented in Smalltalk/X, MagLev is implemented in GemStone Smalltalk and Ruby, YARI is implemented in Io.
And for C++: Clang (which is the C, C++ and Objective-C front-end for LLVM) is implemented in C++ (all three front-ends are implemented in C++).
"C++ is implemented in C". I understand this as "C++ compiler is written in C language". Quite simple, without too much philosophy.
Generally, C++ compiler can be written in any language, including C++ itself (except of the first compiler version).
"Python was implemented in C" means that at least one Python compiler (in this case the most commonly used one) is written using C. The developers of that implementation of Python made a deliberate decision not to use C++. As a statement it is incomplete as Python has also been implemented in Java, in C# and in Python.
The main relevance is that it gives you some idea of the systems you might be able to port the language onto: anything targeted by a C compiler should (at least in theory) be capable of running the C implementation of Python, but if they'd chosen to use C++ there would be a smaller set of systems that could run it.
C++ usually isn't implemented in C these days: I believe it is usually implemented in C++. It is quite common for languages to be implemented in the same language (or a subset of the language) as it means you are no longer dependent on some other unrelated language being available for the target. To bootstrap onto a new system you cross compile from some other system.
If you compile gcc for a new platform the build process involves compiling the source code once with whatever compiler is already available (perhaps an older gcc), then compiling it a second time with the newly compiled compiler, then compiling it a third time with the output from the second compilation. If the second and third versions aren't identical you get a build error. If they are identical then you've got a pretty good indication that it compiled correctly.
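The heart of that procedure is a fixed-point check: once the compiler compiles itself, recompiling with its own output must change nothing. A toy model in Python, with invented stand-ins (old_compiler, run_compiler) for the real build steps:

def old_compiler(source):
    # stand-in for whatever compiler is already available
    return "binary[" + source + "]"

def run_compiler(binary, source):
    # stand-in for running a compiled compiler on some source;
    # a correct compiler's self-compiled output is a fixed point
    return "binary[" + source + "]"

compiler_source = "...the compiler's own source code..."
stage1 = old_compiler(compiler_source)           # built with the old compiler
stage2 = run_compiler(stage1, compiler_source)   # built with stage 1
stage3 = run_compiler(stage2, compiler_source)   # built with stage 2
assert stage2 == stage3, "build error: bootstrap did not converge"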
A programming language is a standard. Its interpreter or compiler is an implementation of this standard.
To build a new language, you don't necessarily need to do it in low-level machine code (assembly, for instance). So using another language to accomplish your goal (creating a new language here) is perfectly normal. So when we say "Python was implemented in C", it just means that C was used to create that language. For instance, C can be compiled on many different architectures, so the programmers don't have to take care of the different types of computers (portability).
A language is just a way to express yourself to the computer. Today, it can be done in various ways. But when you use the same syntax as the language and create your own framework, it's called a library or framework. A programming language is just a notation for writing programs. If the notation changes, you have a different language, just as French or Spanish come from Latin. (French is implemented in Latin ;)
Why are there so many different languages? Because the goal of a language is to solve complex problems. So, depending on what you are trying to accomplish, choosing the appropriate language can be an important decision.
The statement "Language X is implemented in Language Y" makes sense and is true if and only if there exists a canonical implementation of Language X and that implementation is written in Language Y. In common usage, either the first or the most popular implementation is often assumed to be canonical.
For example, Perl is one of the few languages with a definitive canon. "Python is implemented in C" makes sense if CPython is taken to be the canonical implementation of Python, and "C++ is implemented in C" is true for CFront, the original implementation of "C with classes" by Bjarne Stroustrup.
The direct answer:
"Implementation" in the context you are talking about just means "written", and "language" actually means "compiler".
The original C++ compiler was, as I understand it, written in C. There is nothing (apart from knowledge and time) to stop you from writing a C++ compiler in another language.
Implementation is the code that makes software work. Often we talk about the implementation of a function as in: "the function has not been implemented yet."
e.g.
#include <stdexcept>

void foo()
{
    // function has not been implemented yet
    throw std::logic_error("foo() not implemented");
}
This often happens during the design phase of a program, because the call needs to be there in order to write/debug/concept-test the calling code, but we haven't got round to implementing it (writing the code to go inside the function).
Related
I'm trying to find a word to categorize some languages, and don't quite have it.
Category 1 Example: A Ruby program must be interpreted; it is really instructions for the ruby interpreter. Python and JavaScript are like this, too.
Category 2 Example: A compiled C program needs no interpreter; it is really instructions for the computer. C++, Rust, and Go are like this, too.
I would say "compiled vs interpreted", but there are languages that compile to bytecode. E.g., Java compiles, but you can't run a Java program without having Java installed.
I would say "compiles to machine code", but I've read that C compilers may compile to assembler instead, and I'm fuzzy on that distinction anyway. Which raises the question "are we even talking about a feature of the language itself?" You can write a C interpreter, after all.
Informally, I'd say "you can use Rust or Go to do the kinds of things you'd use C for - to produce efficient binaries with no runtime dependencies."
Is there a word for "equivalent to C" in this sense?
No.
As you've already alluded to, there are various ways of classifying a language.
How it is executed (compiled vs interpreted)
How it exists at the lowest level (byte code vs machine code)
Similarity of the language syntax
Problem domain (scripting language, general purpose, functional programming)
Even the first of these falls down with C. It is perfectly possible to write a C interpreter - see Herbert Schildt's C: The Complete Reference for such an example.
I think that if you really want to categorize languages, you should look at metaprogramming and programming paradigms (http://en.wikipedia.org/wiki/Metaprogramming) and metalanguages,
because even if one language is mainly compiled and another is mainly interpreted, it's the compiler/interpreter that decides whether to pass through bytecodes/opcodes or to compile to machine language, to compile incrementally or to compile interactively, and so on.
I hope this is helpful.
There are programming languages and theorem provers based on higher order logic (HOL). Examples include Twelf, lambda Prolog, and Isabelle. For example, Twelf is both a programming language and a theorem prover, while Isabelle is mainly a theorem prover, but code extraction is available for Isabelle.
I am looking for a HOL programming language based on Haskell. The reason is that I like, for instance, lambda Prolog very much, but it is not meant as a practical programming language. Lambda Prolog lacks a standard library, and interfacing with external libraries doesn't seem trivial. The problem is that if you need some functionality, like writing a parser for a text file, you can't interface, say, with the many existing libraries available for Haskell, and since there is no standard library, you start from scratch.
Today I came across the Caledon programming language, which was implemented as a master's thesis, it seems. From the github page:
Caledon is a dependently typed, polymorphic, higher order logic
programming language.
This is interesting, since it is written in Haskell, so it should be easy to extend and interface with existing Haskell libraries. But it seems that the project is at an early stage; I am not sure if input-output (IO) is implemented. Since I learned about Caledon only today, I think I might have missed some further projects. (BTW, I am not interested in standard logic programming languages like Prolog.)
Are there programming languages based on higher order logic besides Caledon that are implemented in haskell?
(I am asking for "implemented in haskell", as it is rather easy to connect programming languages that can be extracted to or are implemented in haskell. For example the Agda programming language can compile to haskell code and haskell libraries can be used conveniently and is extremly easy to use haskell libraries if you know how. Many other programming languages (e.g., ATS) I belive only provide the smallest common denominator which is a C based foreign function interface (FFI). In my eyes it is quite cumbersome to connect two higher programming languages via their respective C-based FFI interface. Thus the seemly abitrary part that "it should be implemented in haskell". Further, as a side note some users have downvoted in the past for my description of Agda as a programming language, but of course this is not true, i.e., consider Curry-Howard )
"Haskabelle is a converter from Haskell source files to Isabelle/HOL theories implemented in Haskell itself."
Haskabelle
Strange statement: Haskell' is a higher order logic programming language based on Haskell. Type inference in Haskell with multiparameter type classes, type families, undecidable instances and whatnot actually forms a higher order logic programming language. This probably doesn't help you very much because:
The spec is literally constantly changing (I've had a few packages lose compatibility as they were based on hacks that got "fixed")
The type system itself doesn't have IO (yet?)
It can't really call other Haskell libraries from type inference
It's not very fast.
The logic programming semantics aren't exactly clear or stable.
It doesn't permit you to unify with lambdas or other type classes, although it does permit unification with functions.
Sadly, I know of extraordinarily few full HOL languages let alone ones implemented in Haskell - it turns out higher order unification is a huge pain to implement.
Short answer: I don't know. Long answer: you have a small chance of finding a purely academic language with thousands of libraries and tools for it. If you for some reason need that specific language for some specific problem, then use it ONLY for that problem, not for parsing files, calculating taxes, or launching rockets. Create a library and link it with other programs, or even better, create a microservice or connect the programs in another way (e.g., standard input/output) that doesn't require much effort. Always use the best tool for the job.
I'm sketching a design of something (machine learning of functions) that will preferably want a functional programming language, and also introspection, specifically the ability to examine the program's own code in some nicely tractable format, and preferably also the ability to get machine generated code compiled at runtime, and I'm wondering what's the best language to write it in. Lisp of course has strong introspection capabilities, but the statically typed languages also have advantages; the ones I'm considering are:
F# - the .Net platform has a good story here, you can read byte code at run time and also emit byte code and get it compiled; I assume there's no problem accessing these facilities from F#.
Haskell, Ocaml - do these have similar facilities, either via byte code or parse tree?
Are there other languages I should also be looking at?
Haskell's introspection mechanism is Template Haskell, which supports compile time metaprogramming, and when combined with e.g. llvm, provides runtime metaprogramming facilities.
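For a language-neutral feel of what run-time metaprogramming means, here is a minimal sketch in Python, whose built-in compile and exec generate and run code at run time (the generated function square is invented for illustration):

# Build source code as a string, compile it at run time, then call it.
src = "def square(x):\n    return x * x\n"
namespace = {}
exec(compile(src, "<generated>", "exec"), namespace)
print(namespace["square"](7))  # -> 49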
Ocaml has:
Camlp4 to manipulate Ocaml concrete syntax trees in Ocaml. The maintained implementation of Camlp4 is Camlp5.
MetaOCaml for full-scale multi-stage programming.
Ocamljit to generate native code at run time, but I don't think it's been maintained recently.
Ocaml-Java to compile Ocaml code for the Java virtual machine. I don't know if there are nice reflection capabilities.
Not really an answer, but note also the F# Quotations feature and library, for more homoiconicity stuff.
You might check out the typed variant of Racket (previously known as PLT Scheme). It retains most of the syntactic simplicity of Scheme, but provides a static type system. Since Racket is a Scheme, metaprogramming is par for the course, and the runtime can emit native code by way of a JIT.
The Haskell approach would be more along the lines of parsing the source. The Haskell Platform includes a complete source parser, or you can use the GHC API to get access that way.
I'd also look at Scala or Clojure, which bring with them all the libraries that have been developed for Java. You'll never need to worry whether a library exists. But more to the point of your question, these languages give you the same reflection (or more powerful types) that you will find in Java.
I'm sketching a design of something (machine learning of functions) that will preferably want a functional programming language, and also introspection, specifically the ability to examine the program's own code in some nicely tractable format, and preferably also the ability to get machine generated code compiled at runtime, and I'm wondering what's the best language to write it in. Lisp of course has strong introspection capabilities, but the statically typed languages also have advantages; the ones I'm considering are:
Can you not just parse the source code like an ordinary interpreter or compiler? Why do you need introspection?
F# - the .Net platform has a good story here, you can read byte code at run time and also emit byte code and get it compiled; I assume there's no problem accessing these facilities from F#.
F# has a rudimentary quotation mechanism but you can only quote some expressions and not other kinds of code, most notably type definitions. Also, its evaluation mechanism is orders of magnitude slower than genuine compilation so it is basically completely useless. You can use reflection to analyze type definitions but, again, it is quite rudimentary.
You can read byte code but that has been compiled so a lot of information and structure has been lost.
F# also has lexing and parsing technology (most notably fslex, fsyacc and FParsec) but it is not as mature as OCaml's.
Haskell, Ocaml - do these have similar facilities, either via byte code or parse tree?
Haskell has Template Haskell but I've never heard of anyone using it (abandonware?).
OCaml has its Camlp4 macro system and a few people do use it but it is poorly documented.
As for lexing and parsing, Haskell has a few libraries (most notably Parsec) and OCaml has many libraries.
Are there other languages I should also be looking at?
Term rewrite languages like Mathematica would be an obvious choice because they make it trivial to manipulate code. The Pure language might be of interest.
You might also consider MetaOCaml for its run-time compilation capabilities.
At the risk of sounding naive, I ask this question in search of a deeper understanding of the concept of programming languages in general. I write this question for my own edification and the edification of others.
What is a useful definition of a computer programming language and what are its basic and necessary components? What are the key features that differentiate languages (functional, imperative, declarative, object oriented, scripting, etc...)?
One way to think about this question: imagine you are looking at the hardware of a modern desktop or laptop computer. Assume that the C language or any of its variants do not exist. How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Tangentially related, what is it about computer languages that allow other languages to exist? For example take a scripting language like Javascript, Perl, or PHP. I assume part of the definition of these is that there is an interpreter most likely implemented in C or C++ at some level. Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
I would be satisfied with a list of concepts that can be looked up or researched further.
Like any language, programming languages are simply a communication tool for expressing and conveying ideas. In this case, we're translating our ideas of how software should work into a structured and methodical form that computers (as well as other humans who know the language, in most cases) can read and understand.
What is a useful definition of a computer programming language and what are its basic and necessary components?
I would say the defining characteristic of a programming language is as follows: things written in that language are intended to eventually be transformed into something that is executed. Thus, pseudocode, while perhaps having the structure and rigor of a programming language, is not actually a programming language. Likewise, UML can express many powerful ideas in an abstract manner just like a programming language can, but it falls short because people don't generally write UML to be executed.
How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Even if the word "programming language" wasn't part of the shared vocabulary of the group I was talking to, I think it would be obvious to the others that we'd need a way to communicate with the computer. Just as no one expects a car to drive itself (yet!) without external instructions in the form of interaction with the steering wheel and pedals, no one could expect the hardware to function without being told what to do. As noted above, a programming language is the conduit through which we can make that communication happen.
Tangentially related, what is it about computer languages that allow other languages to exist?
All useful programming languages have a property called Turing completeness. If one language in the Turing-complete set can do something, then any of them can; they are said to be computationally equivalent.
However, just because they're equally "powerful" doesn't mean they're equally nice to work with for humans. This is why many people are willing to sacrifice the unparalleled micromanagement you get from writing assembly code in exchange for the expressiveness and power you get with higher-level languages, like Ruby, Python, or C#.
Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
Since there is a Javascript interpreter written in C, it follows that it must be possible to write a Javascript interpreter in Javascript, since both are Turing-complete. However, again, note that Turing-completeness says nothing about how hard it is to do something in one language versus another -- only whether it is possible to begin with. Your Javascript-interpreter-inside-Javascript might well be horrendously inefficient, consume absurd amounts of memory, require enormous processing power, and be a hideously ugly hack. But Turing-completeness guarantees it can be done!
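To make the self-hosting idea concrete, here is a minimal sketch of an interpreter for (a sliver of) a language written in that same language: an evaluator for Python arithmetic expressions, in Python. It handles only integer literals and +, -, *; a full self-interpreter would cover the whole language, just far less efficiently.

import ast

def evaluate(node):
    # Walk the parsed syntax tree and compute the value it denotes.
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left, right = evaluate(node.left), evaluate(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Sub):
            return left - right
        if isinstance(node.op, ast.Mult):
            return left * right
    raise ValueError("unsupported construct")

print(evaluate(ast.parse("1 + 2 * 3", mode="eval")))  # -> 7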
While this doesn't directly answer your question, I am reminded of the Revenge of the Nerds essay by Paul Graham about the evolution of programming languages. It's certainly an interesting place to start your investigation.
Not a definition, but I think there are essentially two strands of development in programming languages:
Those working their way up from what the machine can do to something more expressive and less tied to the machine (Assembly, Fortran, C, C++, Java, ...)
Those going down from some mathematical or theoretical computer science concept of computation to something implementable on a real machine (Lisp, Prolog, ML, Haskell, ...)
Of course, in reality the picture is not as neat, and both strands influence each other by borrowing the best ideas.
Slightly long rant ahead.
A computer language is actually not all that different from a human language. Both are used to express ideas and concepts in commonly understood terms. Among different human languages there are syntactic differences, but you can express the same thing in every language (does that make human languages Turing complete? :)). Some languages are better suited for expressing certain things than others.
For example, although technically not completely correct, the Inuit language seems quite suited to describing various kinds of snow. Japanese, in my experience, is very suitable for expressing one's feelings and state of mind, thanks to a large, concise vocabulary in that area. German is pretty good for being very precise, thanks to largely unambiguous grammar.
Different programming languages have different specialities as well, but they mostly differ in the level of detail required to express things. The big difference between human and programming languages is mostly that programming languages lack a lot of vocabulary and have very few "grammatical" rules. With libraries you can extend the vocabulary of a language though.
For example:
Make me coffee.
Very easy to understand for a human, but only because we know what each of the words mean.
coffee : a drink made from the roasted and ground beanlike seeds of a tropical shrub
drink : a liquid that can be swallowed
swallow : cause or allow to pass down the throat
... and so on and so on
We know all these definitions by heart, but we had to learn them at some point.
In the same way, a computer can be "taught" to "understand" words as well.
Coffee::make()->giveTo($me);
This could be a perfectly valid expression in a computer language, if the computer "knows" what Coffee, make() and giveTo() mean and if $me is defined. It expresses the same idea as the English sentence, just with a different, more rigorous syntax.
In a different environment you'd have to say slightly different things to get the same outcome. In Japanese for example you'd probably say something like:
コーヒーを作ってもらっても良いですか?
Kōhī o tsukuttemoratte mo ii desu ka?
Which would roughly translate to:
if ($Person->isAgreeable('Coffee::make()')) {
    return $Person->return(Coffee::make());
}
Same idea, same outcome, but the $me is implied and if you don't check for isAgreeable first you may get a runtime error. In computer terms that would be somewhat analogous to Ruby's implied behaviour of returning the result of the last expression ("grammatical feature") and checking for available memory first (environmental necessity).
If you're talking to a really slow person with little vocabulary, you probably have to explain things in a lot more detail:
Go to the kitchen.
Take a pot.
Fill the pot with water.
...
Just like Assembler. :o)
Anyway, the point being, a programming language is actually a language just like a human language. Their syntax is different and specialized for the problem domain (logic/math) and the "listener" (computers), but they're just ways to transport ideas and concepts.
EDIT:
Another point about "optimization for the listener" is that programming languages try to eliminate ambiguity. The "make me coffee" example could, technically, be understood as "turn me into coffee". A human can tell what's meant intuitively, a computer can't. Hence in programming languages everything usually has one and one meaning only. Where it doesn't you can run into problems, the "+" operator in Javascript being a common example.
1 + 1 -> 2
'1' + '1' -> '11'
See "Programming Considered as a Human Activity." EWD 117.
http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
Also See http://www.csee.umbc.edu/331/current/notes/01/01introduction.pdf
Human expression which:
describes mathematical functions
makes the computer turn switches on and off
This question is very broad. My favorite definition is that a programming language is a means of expressing computations:
Precisely
At a high level
In ways we can reason about them
By computation I mean what Turing and Church meant: the Turing machine and the lambda calculus have equivalent expressive power (which is a theorem), and the Church-Turing hypothesis (which is a conjecture) says roughly that there's no more powerful notion of computation out there. In other words, the kinds of computations that can be expressed in any programming languages are at best the kinds that can be expressed using Turing machines or lambda-calculus programs—and some languages will be able to express only a subset of those calculations.
This definition of computation also encompasses your friendly neighborhood hardware, which is pretty easy to simulate using a Turing machine and even easier to simulate using the lambda calculus.
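As a taste of how the lambda calculus expresses computation with nothing but functions, here is a sketch of Church numerals, written in Python only because any host language will do (the names zero, succ, add, and to_int are invented for illustration):

zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # decode a Church numeral for display
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # -> 5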
Expressing computations precisely means the computer can't wiggle out of its obligations: if we have a particular computation in mind, we can use a programming language to force the computer to perform that computation. (Languages with "implementation defined" or "undefined" constructs make this task more difficult. Programmers using these languages are often willing to settle for—or may be unknowingly settling for—some computation that is only closely related to the computation they had in mind.)
Expressing computation at a high level is what programming languages are all about. An important reason that there are so many different programming languages out there is that there are so many different high-level ways of thinking about problems. Often, if you have an important new class of problems to solve, you may be best off creating a new programming language. For example, Larry Wall's writing suggests that solving a class of problems called "systems administration" was a motivation for him to create Perl.
(Another reason there are so many different programming languages out there is that creating a new language is a lot of fun, and anyone can learn to do it.)
Finally, many programmers want languages that make it easy to reason about programs. For example, today a student of mine implemented a new algorithm that made his program run over six times faster. He had to reason very carefully about the contents of C arrays to make sure that the new algorithm would do the same job the old one did. Luckily C has decent tools for reasoning about programs, for example:
A change in a[i] cannot affect the value of a[i-1].
My student also applied a reasoning principle that isn't valid in C:
The sum of a sequence of unsigned integers will be at least as large as any integer in the sequence.
This isn't true in C because the sum might overflow. One reason some programmers prefer languages like Standard ML is that in SML, this reasoning principle is always valid. Of languages in wide use, probably Haskell has the strongest reasoning principles; Richard Bird has developed equational reasoning about programs to a high art.
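One can see the failure concretely by simulating C's 32-bit unsigned wrap-around; a small sketch in Python (the helper c_unsigned_sum is invented for illustration):

UINT_MAX = 2**32 - 1

def c_unsigned_sum(xs):
    # C's unsigned arithmetic is modulo 2**32, so the running total wraps.
    total = 0
    for x in xs:
        total = (total + x) & UINT_MAX
    return total

xs = [UINT_MAX, 5]
print(c_unsigned_sum(xs))             # 4: the sum wrapped around
print(c_unsigned_sum(xs) >= max(xs))  # False, so the principle fails in C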
I will not attempt to address all the tangential details that follow your opening question. But I hope you will get something out of an answer that aims to give a deeper understanding, as you asked, of a fundamental question about programming languages.
One thing a lot of "IT" types forget is that there are 2 types of computer programming languages:
Software programming languages: C, Java, Perl, COBOL, etc.
Hardware programming languages: VHDL, Verilog, System Verilog, etc.
Interesting.
I'd say the defining feature of a programming language is the ability to make decisions based on input. Effectively, if and goto. Everything else is lots and lots of syntactic sugar. This is the idea that spawned Brainfuck, which is actually remarkably fun to (try to) use.
There are places where the line blurs; for example, I doubt people would consider XSLT to really be a programming language, but it's Turing-complete. I've even solved a Project Euler problem with it. (Very, very slowly.)
Three main properties of languages come to mind:
How is it run? Is it compiled to bare metal (C), compiled to mostly bare metal with some runtime lookup (C++), run on a JIT virtual machine (Java, .NET), bytecode-interpreted (Perl), or purely interpreted (uhh..)? This doesn't comment much on the language itself, but speaks to how portable the code may be, what sort of speed I might expect (and thus what broad classes of tasks would work well), and sometimes how flexible the language is.
What paradigms does it support? Procedural? Functional? Is the standard library built with classes or functions? Is there reflection? Is there, ideally, support for pretty much whatever I want to do?
How can I represent my data? Are there arrays, and are they fixed-size or not? How easy is it to use strings? Are there structs or hashes built in? What's the type system like? Are there objects? Are they class-based or prototype-based? Is everything an object, or are there primitives? Can I inherit from built-in objects?
I realize the last one is a very large collection of potential questions, but it's all related in my mind.
I imagine rebuilding the programming language landscape entirely from scratch would work pretty much how it did the first time: iteratively. Start with assembly, the list of direct commands the processor understands, and wrap it with something a bit easier to use. Repeat until you're happy.
Yes, you can write a Javascript interpreter in Javascript, or a Python interpreter in Python (see: PyPy), or a Python interpreter in Javascript. Such languages are called self-hosting. Have a look at Perl 6; this has been a goal for its main implementation from the start.
Ultimately, everything just has to translate to machine code, not necessarily C. You can write D or Fortran or Haskell or Lisp if you want. C just happens to be an old standard. And if you write a compiler for language Foo that can ultimately spit out machine code, by whatever means, then you can rewrite that compiler in Foo and skip the middleman. Of course, if your language is purely interpreted, this will probably result in a stack overflow...
As a friend taught me about computer languages, a language is a world. A world of communication with that machine. It is a world for implementing ideas, algorithms, and functionality, as Alonzo and Alan described. It is the technical equivalent of the mathematical structures that the aforementioned scientists built. It is a language with expressions and also limits. However, as Ludwig Wittgenstein said, "The limits of my language mean the limits of my world"; there are always limitations, and that's how one chooses the language that best fits one's needs.
This is a generic answer... some thoughts, actually, rather than an answer.
There are many definitions to this but what I prefer is:
Computer programming is programming that helps to solve a particular technical task/problem.
There are 3 key phrases to look out for:
You: The computer will do what you (the programmer) tell it to do.
Instruct: Instruction is given to the computer in a language that it can understand. We will discuss that below.
Problem: At the end of the day, computers are (complex) tools. They are there to make our life simpler.
The answer can be lengthy but you can find more about computer programming
Wikipedia says:
A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.
But is this true? It occurred to me in the shower this morning that a programming language might just be a set of conventions, something that both a human and an appropriately arranged compiler can interpret. If that's the case, then isn't this definition of a programming language misleading? If that isn't the case, then what's the difference between a compiler and the language it compiles?
Thanks!
z.
A programming language is exactly that set of conventions, but I don't see why that makes the Wikipedia entry misleading, really. If it makes you feel better, you might edit it to read something like:
A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to define programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.
I understand what you are saying, and you are right. Describing a programming language as a "machine-readable artificial language designed to express computations that can be performed by a machine" is unnecessarily specific. Programming languages can be more broadly generalized as established descriptions of tasks (or "a set of conventions") that allow one entity to control the behavior of another. What we traditionally identify as programming languages are just a layer of abstraction between machine code and programmers, and are specifically designed for electronic computers.
Programming languages are not limited to traditional computers (see the K'NEX Computer), and aren't even necessarily limited to computational devices at all. For example, when I am pleased with my dog's behavior, he gets a treat. When I am displeased, he gets nothing. Over time the dog learns the treat/no treat programming and I can use the treats to control his behavior (to an extent).
I don't see what is different between what you are asking...
It occurred to me in the shower this morning that a programming language might just be a set of conventions, something that both a human and an appropriately arranged compiler can interpret.
... and the Wikipedia definition.
The key is that a programming language is just "a machine-readable artificial language".
A compiler does indeed act as an effective specification of a language in terms of a reduction to machine code - however, as it's generally difficult to understand a language by reading the compiler's source, one generally considers a programming language in terms of an abstract processing model that the compiler implements. This abstract model is what one means when one refers to the programming language.
That said, there are indeed many languages (Hi there, PHP!) in which the compiler is the only specification of the language in existence. These languages tend to change unpredictably at times as compiler bugs are fixed or introduced.
Programming languages are an abstraction layer that helps insulate the programmer from having to talk in electrical signals to the computer. The creators of the language have done all the hard work in creating a structure (language) or standard (grammar, conjugation, etc.) that then can be interpreted by a compiler in terms that the computer understands.
All programming languages are really nothing more than domain specific languages for machine code or manipulating the registers and memory of a processing entity.
This is probably the true explanation of what a programming language really is:
Step 1: Think of a language and its grammar, which is a set of rules for making syntactically valid statements using the language. For example, a language called GRID has tiles {0,1} as its alphabet and grammar rules that make sure every GRID statement has equal length and height.
Step 2 (definition of program): GRID, so far, is useless. I'd dare to think of any valid statement of GRID as just data. We need to add something else to GRID: a successor function. So GRID = {grammar, alphabet, successor function}. To make this clear, let's use the rules of "The Game of Life" as the successor function.
Step 3: The Game of Life is actually Turing Complete, so GRID={Grammar, alphabet, successor function = GOL} can perform any computation that is computable.
A programming language is nothing but a language with a successor function. The environment that evaluates a valid statement of the language(program) does nothing but follow those successor functions. Variables, for example, are things whose successor functions = (STAY THE SAME)
Computers are just very fast environments ;)
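To make GRID concrete, here is a minimal sketch of its successor function, one Game of Life step, in Python (the names gol_step and live_neighbors are invented for illustration; edges wrap around):

def gol_step(grid):
    # One application of the GRID successor function: the rules of
    # Conway's Game of Life on a grid of 0/1 tiles (toroidal edges).
    h, w = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % h][(c + dc) % w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if live_neighbors(r, c) == 3
                  or (grid[r][c] == 1 and live_neighbors(r, c) == 2)
                  else 0
             for c in range(w)]
            for r in range(h)]

blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
after = gol_step(blinker)
print([(r, c) for r in range(5) for c in range(5) if after[r][c]])
# -> [(1, 2), (2, 2), (3, 2)]: the horizontal blinker is now vertical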
Wikipedia's definition might have been taken out of context. For one thing, only programs written in machine code are machine-readable. Otherwise, you need a compiler to convert C++, Java or even assembly code to machine code so the computer can carry out your instructions. Unless you include comments that are only readable to humans, or unless you are strictly discussing a topic within the realm of your program, programming is insufficient for human communication.