About learning semantics - programming-languages

I have to do something related to semantics, so I am reading Semantics Engineering with PLT Redex. I find it a little hard to understand; it is not the way I thought about doing computer science. I remember how excited I was reading the CSAPP book, but I am not very excited about this one, or about the field of semantics! I think the reason is that I do not yet understand it, or what the point of it is. Maybe. But I need some suggestions to point me in the right direction: how should I proceed with learning semantics?

What is semantics: Semantics is usually contrasted with syntax. Syntax describes how pieces of a language may be arranged. Semantics describes what those arrangements do. It describes what effects they have.
From the perspective of one who writes a language, syntax is the specification for the lexer, parser and Abstract Syntax Tree. Semantics is the specification for the Eval / Apply loop.
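To make the contrast concrete, here is a minimal sketch in Haskell (added for illustration, not part of the original answer; the toy language and all names are invented): the data type plays the role of the syntax, or AST, and the eval function plays the role of the semantics.

-- Syntax: how expressions of a tiny arithmetic language may be arranged.
data Expr
  = Lit Int          -- a literal number
  | Add Expr Expr    -- e1 + e2
  | Mul Expr Expr    -- e1 * e2
  deriving Show

-- Semantics: what each arrangement means, here a mapping from Expr to Int.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  evaluates to 7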
Why is semantics interesting: Syntax is a solved problem. While it is pleasingly complex, once you have written a few good parsers they all start to look the same. The process of giving a language meaning, and having that meaning lead to a useful, concise, and clear tool, is a much deeper subject. It is just an opinion, but it is common among academics in the Computer Science field to say that semantics make the language.
Some Semantic Concepts: Object-oriented programming is a semantic notion. Smalltalk and Java have very different syntax from each other, but share the semantics of "object" as encapsulated data operated on by a defined set of methods. Functional programming is another semantic idea.
I am not an academic, and I have not kept up with the recent pedagogy of semantics, so I cannot address well what is now being taught; but as a coder and one interested in Computer Science, I find the topic both fascinating and highly applicable.

Related

machine representation of natural text

I'm currently working on high-level machine representation of natural text.
For example,
"I had one dog but I gave it to Danny who didn't have any"
would be
I.have.dog =1
I.have.dog -=1
Danny.have.dog = 0
Danny.have.dog +=1
something like this....
I'm trying to find resources, but can't really find matching topics..
Is there a valid subject name for this type of research? Any library of resources?
Natural logic sounds like something related but it's not really the same thing I'm working on. Please help me out!
Representing natural language's meaning is the domain of computational semantics. Within that area, lots of frameworks have been developed, though the basic one is still first-order logic.
Specifically, your problem seems to be that of recognizing discourse semantics, which deals with information change brought about by language use. This is pretty much an open area of research, so expect to find a lot of research papers and PhD positions, but little readily-usable software.
As larsmans already said, this is pretty much a really open field of research, called computational semantics (a subfield of computational linguistics.)
There's one important thing that you'll need to understand before starting off in the comp-sem world: most people there use fancy high-level languages. By high-level I don't mean C, but rather something like LISP, Prolog, or, as of late, Haskell. Computational semantics is very close to logic, which is why people researching the topic are more comfortable with functional and logical languages — they're closer to what they actually use all day long.
It will also be very useful for you to first look at some foundational course in predicate logic, since that's what the underlying literature usually takes for granted.
A good introduction to the connection between logic and language is L.T.F. Gamut — Logic, Language, and Meaning, volume I. This deals with the linguistic side of semantics, which won't help you implement anything, but it will help you understand the following literature. That said, there are at least some books that will explain predicate logic as they go, but if you ask me, any person really interested in the representation of language as a formal system should take a course in predicate and possibly intuitionistic and intensional logic.
To give you a bit of a peek, your example is rather difficult to treat with current comp-sem approaches. Not impossible, but already pretty high up the scale of difficulty. What makes it difficult is the tense, for one part (dealing with tense and aspect will typically bring you into event semantics), but also that you'd have to define the give and have relations in a way that works for this example. (An easier example to work with would be, say, "I had a dog, but I gave it to Danny who didn't have any." Can you see why?)
Let's translate "I have a dog."
∃x[dog(x) ∧ have(I,x)]
(There is an object x, such that x is a dog and the have-relation holds between "I" and x.)
These sentences would then be evaluated against a model, where the "I" constant might already be defined. By evaluating multiple sentences in sequence, you could then alter that model so that it keeps track of a conversation.
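As a rough sketch of what "evaluating against a model" could look like in code (added for illustration, not from the answer above; the model, the entities, and the relations are all made up), here is a toy version in Haskell:

-- A toy model for formulas like ∃x[dog(x) ∧ have(I,x)].
data Entity = Me | Danny | Rex deriving (Eq, Show)

universe :: [Entity]
universe = [Me, Danny, Rex]

dog :: Entity -> Bool
dog x = x == Rex

have :: Entity -> Entity -> Bool
have owner x = (owner, x) `elem` [(Me, Rex)]

-- "I have a dog": there exists an x such that x is a dog and I have x.
iHaveADog :: Bool
iHaveADog = any (\x -> dog x && have Me x) universe   -- True in this model

Updating the facts behind have after each sentence would be a crude version of the discourse tracking described above.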
Let's give you some suggestions to start you off.
The classic comp-sem system is SHRDLU, which places geometric figures of a certain color in a virtual environment. You can play around with it, since there's a Windows-compatible demo online at that page I linked you to.
The best modern book on the topic is probably Blackburn and Bos (2005). It's written in Prolog, but there are sources linked on the page to learn Prolog (now!).
Van Eijck and Unger give a good course on computational semantics in Haskell, which is a bit more recent, but in my eyes not quite as educational in terms of raw computational semantics as Blackburn and Bos.

A naive question about UML and Turing completeness

It's a well-known fact that UML is not Turing complete (in contrast to usual programming languages). But it seems to me that UML is even more flexible than traditional languages. I can't imagine a problem you could describe with a language such as C++ (for example) but not with UML. Quite the contrary: it's much easier for me to imagine a construction that exists in UML but is unrealizable in C++ (Java, Delphi, VB and so on...).
Could you help me understand this? I really can't grasp it.
I'd say that UML IS a Turing-complete language since the addition of the Action Semantics package (this happened in UML version 1.5).
Now UML includes an imperative action language (not to be confused with OCL) that allows a precise definition of the behaviour of class methods. This imperative action language includes the typical set of assignments, if conditions, iterators, ... that you'd expect from any programming language.
This action language is one of the popular components of Executable UML approaches, but it's now part of the UML standard itself.
Interesting question. A couple of points come to mind, although there's probably a whole lot more to it. Apologies, it's quite long.
What can you describe with e.g. C++ that you can't describe with UML?
First, you have to define what you mean by "UML". Generally, people tend to mean the 'core' elements - those on Class Diagrams, State Diagrams, Activity Diagrams etc. - plus OCL (the constraint language).
Given those elements you can't specify imperative algorithms. Specifically, anything that requires assignment. You can however get very close: the steps and decision logic can be expressed using e.g. Activity Diagrams, and the function of each step defined as pre- and post-conditions in OCL. However, you never quite get to fully specifying the behaviour. Take an example of an atomic step whose purpose is to increment the value of an integer. The input is an integer - say X. The output is described by the post-condition X == X#pre+1. However, there's nothing in UML to actually implement the step.
Now it's entirely conceivable to extend usage of UML to address above. The UML Action Semantics were developed precisely to enable specification of actions. Doing so makes the language computationally complete. The problems are merely practical:
There's no universally agreed and adopted syntax for the semantics;
There are very few implementations
What can you describe with UML that can't be implemented in e.g. C++?
In essence nothing. However there are two practical limitations:
UML "specifications" are usually imprecise, ambiguous and/or incomplete. Activity Diagrams, for example, often leave paths dangling. Could it be represented directly in C++? Yes. Would it compile? No.
Some of the mappings from UML constructs to imperative, stack-based languages are non-trivial. State Models are an example: while there are well-known patterns, the mapping is quite complex. This is especially true for hierarchical and/or concurrent behaviour. In an Activity Diagram, it's easy to express that two activities happen in parallel and then synchronise before moving to the next step. That can of course be done in C++ but requires the use of e.g. threading libraries.
It can however be done. In fact, it's what the Executable UML tools do: Model Compilers take an executable UML model and translate it into 100% functioning imperative code.
hth.
As the name implies UML is a modelling language. It can sometimes be applied as a methodology for designing software.
Once upon a time people were dreaming up ways of generating code automatically; the tools were called CASE tools. They never got the code generators to work effectively, although they did remove a lot of boilerplate code. That augmentation became the key to UML, since it provided a way to augment the experience of designing and programming software.
I don't know whether UML is "Turing complete"; I hope it is. Wouldn't it be great to come up with solutions by describing the problem to the computer in pictorial form and letting the computer do all that hard, nasty programming for you?
UML is the meta-language for what the code does: it describes artefacts, how they relate and interact, and what they do.
UML keeps being extended, with new design artefacts added year by year, and if it is not already Turing complete I don't see why it couldn't become so.
However, I think somewhere along the line I read that two languages are "Turing equivalent" if they can both express and solve the same problems.
Since UML is the design language and code is the implementation language based on the UML design, I would say that UML and code (C#, Java, etc.) are Turing equivalent. If they are agreed to be Turing equivalent, then UML must be Turing complete.

What language to learn after Haskell? [closed]

As my first programming language, I decided to learn Haskell. I'm an analytic philosophy major, and Haskell allowed me to quickly and correctly create programs of interest, for instance, transducers for natural language parsing, theorem provers, and interpreters. Although I've only been programming for two and a half months, I found Haskell's semantics and syntax much easier to learn than more traditional imperative languages, and feel comfortable (now) with the majority of its constructs.
Programming in Haskell is like sorcery, however, and I would like to broaden my knowledge of programming. I would like to choose a new programming language to learn, but I do not have enough time to pick up an arbitrary language, drop it, and repeat. So I thought I would pose the question here, along with several stipulations about the type of language I am looking for. Some are subjective, some are intended to ease the transition from Haskell.
Strong type system. One of my favorite parts of programming in Haskell is writing type declarations. This helps structure my thoughts about individual functions and their relationship to the program as a whole. It also makes informally reasoning about the correctness of my program easier. I'm concerned with correctness, not efficiency.
Emphasis on recursion rather than iteration. I use iterative constructs in Haskell, but implement them recursively. However, it is much easier to understand the structure of a recursive function than a complicated iterative procedure, especially when using combinators and higher-order functions like maps, folds and bind (see the short sketch after these stipulations).
Rewarding to learn. Haskell is a rewarding language to work in. It's a little like reading Kant. My experience several years ago with C, however, was not. I'm not looking for C. The language should enforce a conceptually interesting paradigm, which in my entirely subjective opinion, the C-likes do not.
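A minimal sketch of the recursion-versus-combinators point above (added for illustration, not part of the original question): the same sum written with explicit recursion and with a fold.

sumRec :: [Int] -> Int
sumRec []     = 0             -- base case
sumRec (x:xs) = x + sumRec xs -- recursive case

sumFold :: [Int] -> Int
sumFold = foldr (+) 0         -- the same recursion pattern captured by a combinator

-- sumRec [1,2,3] == 6 and sumFold [1,2,3] == 6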
Weighing the answers: These are just notes, of course. I'd just like to reply to everyone who gave well-formed responses. You have been very helpful.
1) Several responses indicated that a strong, statically typed language emphasizing recursion means another functional language. While I want to continue working strongly with Haskell, camccann and larsmans correctly pointed out that another such language would "ease the transition too much." These comments have been very helpful, because I am not looking to write Haskell in Caml! Of the proof assistants, Coq and Agda both look interesting. In particular, Coq would provide a solid introduction to constructive logic and formal type theory. I've spent a little time with first-order predicate and modal logic (Mendellsohn, Enderton, some of Hinman), so I would probably have a lot of fun with Coq.
2) Others heavily favored Lisp (Common Lisp, Scheme and Clojure). From what I gather, both Common Lisp and Scheme have excellent introductory material (On Lisp and The Reasoned Schemer, SICP). The material in SICP causes me to lean towards Scheme. In particular, Scheme through SICP would cover a different evaluation strategy, the implementation of laziness, and a chance to focus on topics like continuations, interpreters, symbolic computation, and so on. Finally, as others have pointed out, Lisp's treatment of code/data would be entirely new. Hence, I am leaning heavily towards option (2), a Lisp.
3) Third, Prolog. Prolog has a wealth of interesting material, and its primary domain is exactly the one I'm interested in. It has a simple syntax and is easy to read. I can't comment more at the moment, but after reading an overview of Prolog and skimming some introductory material, it ranks with (2). And it seems like Prolog's backtracking is always being hacked into Haskell!
4) Of the mainstream languages, Python looks the most interesting. Tim Yates makes the language sound very appealing. Apparently, Python is often taught to first-year CS majors, so it's either conceptually rich or easy to learn. I'd have to do more research.
Thank you all for your recommendations! It looks like a Lisp (Scheme, Clojure), Prolog, or a proof assistant like Coq or Agda are the main languages being recommended.
I would like to broaden my knowledge of programming. (...) I thought I would pose the question here, along with several stipulations about the type of language I am looking for. Some are subjective, some are intended to ease the transition from Haskell.
Strong type system. (...) It also makes informally reasoning about the correctness of my program easier. I'm concerned with correctness, not efficiency.
Emphasis on recursion rather than iteration. (...)
You may be easing the transition a bit too much here, I'm afraid. The very strict type system and purely functional style are characteristic of Haskell and pretty much anything resembling a mainstream programming language will require compromising at least somewhat on one of these. So, with that in mind, here are a few broad suggestions aimed at retaining most of what you seem to like about Haskell, but with some major shift.
Disregard practicality and go for "more Haskell than Haskell": Haskell's type system is full of holes, due to nontermination and other messy compromises. Clean up the mess and add more powerful features and you get languages like Coq and Agda, where a function's type contains a proof of its correctness (you can even read the function arrow -> as logical implication!). These languages have been used for mathematical proofs and for programs with extremely high correctness requirements. Coq is probably the most prominent language of the style, but Agda has a more Haskell-y feel (as well as being written in Haskell itself). (A small illustration of this types-as-propositions reading follows after these suggestions.)
Disregard types, add more magic: If Haskell is sorcery, Lisp is the raw, primal magic of creation. Lisp-family languages (also including Scheme and Clojure) have nearly unparalleled flexibility combined with extreme minimalism. The languages have essentially no syntax, writing code directly in the form of a tree data structure; metaprogramming in a Lisp is easier than non-meta programming in some languages.
Compromise a bit and move closer to the mainstream: Haskell falls into the broad family of languages influenced heavily by ML, any of which you could probably shift to without too much difficulty. Haskell is one of the strictest when it comes to correctness guarantees from types and use of functional style, where others are often either hybrid styles and/or make pragmatic compromises for various reasons. If you want some exposure to OOP and access to lots of mainstream technology platforms, either Scala on the JVM or F# on .NET have a lot in common with Haskell while providing easy interoperability with the Java and .NET platforms. F# is supported directly by Microsoft, but has some annoying limitations compared to Haskell and portability issues on non-Windows platforms. Scala has direct counterparts to more of Haskell's type system and Java's cross-platform potential, but has a more heavyweight syntax and lacks the powerful first-party support that F# enjoys.
Most of those recommendations are also mentioned in other answers, but hopefully my rationale for them offers some enlightenment.
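Picking up the "read -> as logical implication" remark from the first suggestion, here is a tiny, hedged illustration: even in plain Haskell, a total function of a given type can be read as a proof of the propositional formula that the type encodes (the function names are invented for this sketch).

-- (A -> B) -> A -> B     ... modus ponens
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- (A -> B) -> (B -> C) -> (A -> C)     ... implication is transitive
transitive :: (a -> b) -> (b -> c) -> (a -> c)
transitive f g = g . f

In Coq or Agda the type system is rich enough that this reading extends from propositional logic to full correctness proofs about programs.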
I'm going to be That Guy and suggest that you're asking for the wrong thing.
First you say that you want to broaden your horizons. Then you describe the kind of language that you want, and its horizons sound incredibly like the horizons you already have. You're not going to gain very much by learning the same thing over and over.
I would suggest you learn a Lisp — i.e. Common Lisp, Scheme/Racket or Clojure. They're all dynamically typed by default, but feature some sort of type hinting or optional static typing. Racket and Clojure are probably your best bets.
Clojure is more recent and has more Haskellisms like immutability by default and lots of lazy evaluation, but it's based on the Java Virtual Machine, which means it has some odd warts (e.g. the JVM doesn't support tail call elimination, so recursion is kind of a hack).
Racket is much older, but has picked up a lot of power along the way, such as static type support and a focus on functional programming. I think you'd probably get the most out of Racket.
The macro systems in Lisps are very interesting and vastly more powerful than anything you'll see anywhere else. That alone is worth at least looking at.
From the standpoint of what suits your major, the obvious choice seems like a logic language such as Prolog or its derivatives. Logic programming can be done very neatly in a functional language (see, e.g., The Reasoned Schemer), but you might enjoy working with the logic paradigm directly.
An interactive theorem proving system such as Twelf or Coq might also strike your fancy.
I'd advise you to learn Coq, which is a powerful proof assistant with syntax that will feel comfortable to the Haskell programmer. The cool thing about Coq is that it can be extracted to other functional languages, including Haskell. There is even a package (Meldable-Heap) on Hackage that was written in Coq, had properties proven about its operation, and was then extracted to Haskell.
Another popular language that offers more power than Haskell is Agda - I don't know Agda beyond knowing it is dependently typed, on Hackage, and well respected by people I respect, but those are good enough reasons to me.
I wouldn't expect either of these to be easy. But if you know Haskell and want to move forward to a language that gives more power than the Haskell type system then they should be considered.
As you didn't mention any restrictions besides your subjective interests, and you emphasize 'rewarding to learn' (well, OK, I'll ignore the static typing restriction), I would suggest learning a few languages of different paradigms, preferably ones which are 'exemplary' for each of them.
A Lisp dialect for the code-as-data/homoiconicity thing and because they are good, if not the best, examples of dynamic (more or less strict) functional programming languages
Prolog as the predominant logic programming language
Smalltalk as the one true OOP language (also interesting because of its usually extremely image-centric approach)
maybe Erlang or Clojure if you are interested in languages forged for concurrent/parallel/distributed programming
Forth for stack oriented programming
(Haskell for strictly functional, statically typed, lazy programming)
Especially Lisps (CL not as much as Scheme) and Prolog (and Haskell) embrace recursion.
Although I am not a guru in any of these languages, I did spend some time with each of them, except Erlang and Forth, and they all gave me eye-opening and interesting learning experiences, as each one approaches problem solving from a different angle.
So, though it may seem as if I ignored the part about your having no time to try a few languages, I rather think that time spent with any of these will not be wasted, and you should have a look at all of them.
How about a stack-oriented programming language? Cat hits your high points. It is:
Statically typed with type inference.
Makes you re-think common imperative-language concepts like looping. Conditional execution and looping are handled with combinators.
Rewarding - forces you to understand yet another model of computation. Gives you another way to think about and decompose problems.
Dr. Dobbs published a short article about Cat in 2008 though the language has changed slightly.
If you want a strong(er)ly typed Prolog, Mercury is an interesting choice. I've dabbled in it in the past and I liked the different perspective it gave me. It also has moded-ness (which parameters need to be free/fixed) and determinism (how many results are there?) in the type system.
Clean is very similar to Haskell, but has uniqueness typing, which is used as an alternative to monads (more specifically, the IO monad). Uniqueness typing also does interesting things for working with arrays.
I'm a bit late but I see that no one has mentioned a couple of paradigms and related languages that can interest you for their high-level of abstraction and generality:
rewriting systems, like Maude or ELAN;
Constraint Handling Rules (CHR).
Despite its failure to meet one of your big criteria (static* typing), I'm going to make a case for Python. Here are a few reasons I think you should take a look at it:
For an imperative language, it is surprisingly functional. This was one of the things that struck me when I learned it. Take list comprehensions, for example. It has lambdas, first-class functions, and many functionally-inspired compositions on iterators (maps, folds, zips...). It gives you the option of picking whatever paradigm suits the problem best.
IMHO, it is, like Haskell, beautiful to code in. The syntax is simple and elegant.
It has a culture that focuses on doing things in a straightforward way, rather than focusing too minutely on efficiency.
I understand if you are looking for something else though. Logic programming, for instance, might be right up your alley, as others have suggested.
* I assume you mean static typing here, since you want to declare the types. Technically, Python is a strongly typed language, since you can't arbitrarily interpret, say, a string as a number. Interestingly, there are Python derivatives that allow static typing, like Boo.
I would recommend Erlang. It is not a strongly typed language, and you should try it anyway. It is a very different approach to programming, and you may find that there are problems for which strong typing is not The Best Tool(TM). Erlang does provide tools for static type verification (typer, dialyzer), so you can use strong typing on the parts where you gain benefits from it. It can be an interesting experience for you, but be prepared: it will feel very different. If you are looking for "conceptually interesting paradigms", you can find them in Erlang: message passing, memory separation instead of sharing, distribution, OTP, error handling and error propagation instead of error "prevention", and so on. Erlang may be far from your current experience, but it is still brain-tickling if you have experience with C and Haskell.
Given your description, I would suggest OCaml or F#.
The ML family are generally very good in terms of a strong type system. The emphasis on recursion, coupled with pattern matching, is also clear.
Where I am a bit hesitant is on the rewarding to learn part. Learning them was rewarding for me, no doubt. But given your restrictions and your description of what you want, it seems you are not actually looking for something much more different than Haskell.
If you didn't put your restrictions I would have suggested Python or Erlang, both of which would take you out of your comfort zone.
In my experience, strong typing + emphasis on recursion means another functional programming language. Then again, I wonder if that's very rewarding, given that none of them will be as "pure" as Haskell.
As other posters have suggested, Prolog and Lisp/Scheme are nice, even though both are dynamically typed. Many great books with a strong theoretical "taste" to them have been published about Scheme in particular. Take a look at SICP, which also conveys a lot of general computer science wisdom (meta-circular interpreters and the like).
Factor will be a good choice.
You could start looking into Lisp.
Prolog is a cool language too.
If you decide to stray from your preference for a type system, you might be interested in the J programming language. It is outstanding for how it emphasizes function composition. If you like point-free style in Haskell, the tacit form of J will be rewarding. I've found it extraordinarily thought-provoking, especially with regard to semantics.
True, it doesn't fit your preconceptions as to what you'd like, but give it a look. Just knowing that it's out there is worth discovering. The sole source of complete implementations is J Software, jsoftware.com.
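For readers unfamiliar with the terminology, here is a small Haskell illustration (added, not part of the answer) of pointed versus point-free, or tacit, style; J pushes the tacit style much further, composing operators with no named arguments at all.

-- "Pointed": the argument is named.
countWords :: String -> Int
countWords s = length (words s)

-- Point-free (tacit): the function is built by composition alone.
countWords' :: String -> Int
countWords' = length . words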
Go with one of the mainstream languages. Given the resources available, the future marketability of your skills, and the rich developer ecosystems, I think you should start with either Java or C#.
Great question-- I've been asking it myself recently after spending several months thoroughly enjoying Haskell, although my background is very different (organic chemistry).
Like you, C and its ilk are out of the question.
I've been oscillating between Python and Ruby as the two practical workhorse scripting languages today (mules?) that both have some functional components to keep me happy. Without starting any Rubyist/Pythonist debates here, my personal pragmatic answer to this question is:
Learn the one (Python or Ruby) that you first get an excuse to apply.

Is Haskell suitable as a first language?

I have had previous exposure to imperative languages (C, some Java) however I would say I had no experience in programming. Therefore: treating me as a non-programmer, would Haskell be suitable as a first language?
My interests in Pure Mathematics and CS seem to align with the intention of most Haskell tutorials, and although I can recognise the current and future industry value of imperative programming, I find the potential of functional programming (in as much as it seems such a paradigm shift) fascinating.
I guess my question can be distilled as follows - would a non-programmer have to understand imperative programming to appreciate and fully utilise functional programming?
Some references:
Are there any studies on whether functional/declarative or imperative programming is easier to learn as a first language?
Which programming languages have helped you to understand programming better?
Well, the existence of SICP suggests that functional languages can be used as introductory material. Scheme is perhaps more approachable than Haskell, however.
Haskell seems to have a reputation for being "difficult" to learn, but people tend to forget that classic imperative programming is difficult to learn as well. Many people struggle at first with the concept of assigning a value to a variable, and a surprising number of programmers never actually do become comfortable with pointers and indirect references.
The connections between Haskell and abstract mathematics don't really matter as much as people sometimes assume, but for someone interested in the math anyway, looking at the analogies might provide an interesting bonus.
There has been at least one study on the effects of teaching Haskell to beginner programmers:
The Risks and Benefits of Teaching Purely Functional Programming in First Year. Manuel M. T. Chakravarty and Gabriele Keller. Journal of Functional Programming 14(1), pp 113-123, 2004.
With the following abstract:
We argue that teaching purely functional programming as such in freshman courses is detrimental to both the curriculum as well as to promoting the paradigm. Instead, we need to focus on the more general aims of teaching elementary techniques of programming and essential concepts of computing. We support this viewpoint with experience gained during several semesters of teaching large first-year classes (up to 600 students) in Haskell. These classes consisted of computer science students as well as students from other disciplines. We have systematically gathered student feedback by conducting surveys after each semester. This article contributes an approach to the use of modern functional languages in first year courses and, based on this, advocates the use of functional languages in this setting.
So, yes, you can use Haskell, but you should focus on elementary, general techniques and essential concepts, rather than functional programming per se.
There are a number of popular books for beginner programmers that also make Haskell an attractive target for teaching these elementary concepts, including:
"Programming in Haskell"
"The Craft of Functional Programming"
Additionally, Haskell is already widely taught as a first language. But remember, the key is to focus on the core concepts as illustrated in Haskell, not to teach the large, rich language that is Haskell itself.
I'll go against the popular opinion and say that Haskell is NOT a good first programming language for the typical first-time programmer. I don't think it is as approachable for a raw beginner as imperative languages like Ruby.
The reason for this, is that people do not think about the world in a functional manner. When they see a car driving down the street, they see the same car, with ever-changing mutable state. They don't see a series of slightly different immutable cars.
If you check out other SO questions, you'll see that Haskell is pretty much never mentioned as a good choice for a beginner.
However, if you are a mathematician, or already know enough about programming to understand the value of functional programming, I think Haskell is a fine choice.
So to summarize, I think Haskell is a perfect fit for you, but not a good fit for the typical beginner.
EDIT: Thanks for the insightful comments. Owen's point that people think in a multi-paradigm manner is very true. This strengthens my belief that a multi-paradigm language like Ruby would be easier to pick up, and has the added benefit of exposing the student to both imperative and functional thinking. Haskell is decidedly not multi-paradigm.
Chuck mentioned Haskell's sophisticated type system which is another great point. While I personally prefer statically typed languages, using a dynamic language allows a beginner to ignore that piece of the puzzle until they are curious enough to find out what is going on behind the scenes. Haskell's type system, while elegant, is in your face from day 1.
Eleven reasons to use Haskell as a mathematician
I cannot write it better than that. But to summarize:
Haskell is declarative and mathematics is the ultimate declarative language, which means that code written in Haskell is remarkably similar to what you would write as a mathematical statement (a small example follows this summary).
Haskell is high-level, no need to know details about caches, memory management and all the other hardware stuff. Also that means short programs which is always good.
Haskell is great for symbolic computation, algebra, logic ...
Haskell is pretty :)
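A small example of the first point above, that Haskell code stays remarkably close to the corresponding mathematical statement (added for illustration):

-- Set-builder notation  { k^2 | k ∈ {1..n} }  becomes a list comprehension:
squares :: Integer -> [Integer]
squares n = [ k ^ 2 | k <- [1 .. n] ]

-- The textbook example: quicksort written almost as its mathematical specification.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]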
To answer your question: you'll have no problem starting with a functional language as a mathematician with no programming experience. Actually, it's the better choice: you won't have to repair the brain damage you would get from C/Java/whatever.
You should also check Mathematica. Some people tend to dislike it since it is a commercial closed-source product, but I think it's a pretty good environment for doing mathematics.
If you haven't had any experience at all, it will in fact be easier for you to be productive in functional programming, especially PURE functional programming. I'm an immigrant from imperative to functional; I had to deal with forgetting about 80% of what I had learned in order to be productive in Haskell.
In contrast, it's easier to switch from functional to imperative later on.
On one hand, I think Haskell is nice as a first language, but I suppose, for anyone seriously interested in programming, it should be learned in parallel with C or after C (or an assembly). C is necessary to learn what's happening under the hood, what are the costs of doing this and that, and finally appreciate the usefulness of higher level of abstraction and automatic resource management. I think when being exposed to both C (as a low-level imperative language) and Haskell (as a high-level functional language), most students will find Haskell both practical and expressive.
On the other hand, I think that programming is a craft. It is a practical activity, and it is important to learn the joy of creating something new, useful or interesting. So you need to get things done. And the easiest way to do this is to use a language which has tools for your problems, i.e. libraries for your data formats and algorithms for your kind of problems. At this point, Python (or Ruby) may be a better choice, because Hackage still lags behind PyPI in many areas (say, how many days would you need to teach a novice to manipulate an image, or to plot charts, in Haskell?).
So, my opinion is that some exposure to low-level imperative programming is necessary (to OOP, probably not). Then you can understand the value of Haskell. But to get things done and to quickly become productive, Python is a better choice for beginners. Haskell requires a few weeks before it becomes your tool.
I would say that it is suitable as a first language, and that having learned an imperative language first would probably only interfere with the learning process (since it requires lots of unlearning first).
As a caveat, I would add that a functional language's principles would probably be best understood by someone with a mathematical background, as the concepts are abstract mathematical ones.
I know that many schools do teach it as a first functional language, but not as a first language.
Yes, it is. Real World Haskell is a great way to get into it: http://book.realworldhaskell.org/
I would hesitantly say "yes" except for the fact that in learning, finding someone as a mentor or tutor would be a much less daunting task if you chose a more imperative language to start programming. Might I suggest R or Python (with NumPy and SciPy) instead?
No.
It's very easy for a Haskell 98 program to be clearly understood. LYAH is a great tutorial for people with no experience, but trying to prevent a learner from stumbling on extensions x, y, z is going to be tricky. Soon they start to explore and become overwhelmed with advanced programming/mathematical concepts which are much harder to understand, but which need to be understood to read others' code.
If every piece of Haskell were written in just Haskell '98/'10, I would probably say yes, though.
Without necessarily addressing the question as such, I would add: if you find Haskell's persnicketiness too hard, do not be discouraged.
There are other programming languages, even functional ones, which are late bound.

What is a computer programming language?

At the risk of sounding naive, I ask this question in search of a deeper understanding of the concept of programming languages in general. I write this question for my own edification and the edification of others.
What is a useful definition of a computer programming language and what are its basic and necessary components? What are the key features that differentiate languages (functional, imperative, declarative, object oriented, scripting, etc...)?
One way to think about this question: imagine you are looking at the hardware of a modern desktop or laptop computer. Assume that the C language and all of its variants do not exist. How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Tangentially related, what is it about computer languages that allow other languages to exist? For example take a scripting language like Javascript, Perl, or PHP. I assume part of the definition of these is that there is an interpreter most likely implemented in C or C++ at some level. Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
I would be satisfied with a list of concepts that can be looked up or researched further.
Like any language, programming languages are simply a communication tool for expressing and conveying ideas. In this case, we're translating our ideas of how software should work into a structured and methodical form that computers (as well as other humans who know the language, in most cases) can read and understand.
What is a useful definition of a computer programming language and what are its basic and necessary components?
I would say the defining characteristic of a programming language is as follows: things written in that language are intended to eventually be transformed into something that is executed. Thus, pseudocode, while perhaps having the structure and rigor of a programming language, is not actually a programming language. Likewise, UML can express many powerful ideas in an abstract manner just like a programming language can, but it falls short because people don't generally write UML to be executed.
How would you describe to others all the things needed to make the computer expressive and functional in terms of what we expect of personal computers today?
Even if the word "programming language" wasn't part of the shared vocabulary of the group I was talking to, I think it would be obvious to the others that we'd need a way to communicate with the computer. Just as no one expects a car to drive itself (yet!) without external instructions in the form of interaction with the steering wheel and pedals, no one could expect the hardware to function without being told what to do. As noted above, a programming language is the conduit through which we can make that communication happen.
Tangentially related, what is it about computer languages that allow other languages to exist?
All useful programming languages have a property called Turing completeness. If one language in the Turing-complete set can do something, then any of them can; they are said to be computationally equivalent.
However, just because they're equally "powerful" doesn't mean they're equally nice to work with for humans. This is why many people are willing to sacrifice the unparalleled micromanagement you get from writing assembly code in exchange for the expressiveness and power you get with higher-level languages, like Ruby, Python, or C#.
Is it possible to write an interpreter for Javascript in Javascript? Is this a requirement for a complete language? Same for Perl, PHP, etc?
Since there is a Javascript interpreter written in C, it follows that it must be possible to write a Javascript interpreter in Javascript, since both are Turing-complete. However, again, note that Turing-completeness says nothing about how hard it is to do something in one language versus another -- only whether it is possible to begin with. Your Javascript-interpreter-inside-Javascript might well be horrendously inefficient, consume absurd amounts of memory, require enormous processing power, and be a hideously ugly hack. But Turing-completeness guarantees it can be done!
While this doesn't directly answer your question, I am reminded of the Revenge of the Nerds essay by Paul Graham about the evolution of programming languages. It's certainly an interesting place to start your investigation.
Not a definition, but I think there are essentially two strands of development in programming languages:
Those working their way up from what the machine can do to something more expressive and less tied to the machine (Assembly, Fortran, C, C++, Java, ...)
Those going down from some mathematical or theoretical computer science concept of computation to something implementable on a real machine (Lisp, Prolog, ML, Haskell, ...)
Of course, in reality the picture is not as neat, and both strands influence each other by borrowing the best ideas.
Slightly long rant ahead.
A computer language is actually not all that different from a human language. Both are used to express ideas and concepts in commonly understood terms. Among different human languages there are syntactic differences, but you can express the same thing in every language (does that make human languages Turing complete? :)). Some languages are better suited for expressing certain things than others.
For example, although technically not completely correct, the Inuit language seems quite suited to describe various kinds of snow. Japanese in my experience is very suitable for expressing ones feelings and state of mind thanks to a large, concise vocabulary in that area. German is pretty good for being very precise thanks to largely unambiguous grammar.
Different programming languages have different specialities as well, but they mostly differ in the level of detail required to express things. The big difference between human and programming languages is mostly that programming languages lack a lot of vocabulary and have very few "grammatical" rules. With libraries you can extend the vocabulary of a language though.
For example:
Make me coffee.
Very easy to understand for a human, but only because we know what each of the words mean.
coffee : a drink made from the roasted and ground beanlike seeds of a tropical shrub
drink : a liquid that can be swallowed
swallow : cause or allow to pass down the throat
... and so on and so on
We know all these definitions by heart, but we had to learn them at some point.
In the same way, a computer can be "taught" to "understand" words as well.
Coffee::make()->giveTo($me);
This could be a perfectly valid expression in a computer language. If the computer "knows" what Coffee, make() and giveTo() means and if $me is defined. It expresses the same idea as the English sentence, just with a different, more rigorous syntax.
In a different environment you'd have to say slightly different things to get the same outcome. In Japanese for example you'd probably say something like:
コーヒーを作ってもらっても良いですか?
Kōhī o tsukuttemoratte mo ii desu ka?
Which would roughly translate to:
if ($Person->isAgreeable('Coffee::make()')) {
return $Person->return(Coffee::make());
}
Same idea, same outcome, but the $me is implied and if you don't check for isAgreeable first you may get a runtime error. In computer terms that would be somewhat analogous to Ruby's implied behaviour of returning the result of the last expression ("grammatical feature") and checking for available memory first (environmental necessity).
If you're talking to a really slow person with little vocabulary, you probably have to explain things in a lot more detail:
Go to the kitchen.
Take a pot.
Fill the pot with water.
...
Just like Assembler. :o)
Anyway, the point being, a programming language is actually a language just like a human language. Their syntax is different and specialized for the problem domain (logic/math) and the "listener" (computers), but they're just ways to transport ideas and concepts.
EDIT:
Another point about "optimization for the listener" is that programming languages try to eliminate ambiguity. The "make me coffee" example could, technically, be understood as "turn me into coffee". A human can tell what's meant intuitively, a computer can't. Hence in programming languages everything usually has one and one meaning only. Where it doesn't you can run into problems, the "+" operator in Javascript being a common example.
1 + 1 -> 2
'1' + '1' -> '11'
See "Programming Considered as a Human Activity." EWD 117.
http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
Also See http://www.csee.umbc.edu/331/current/notes/01/01introduction.pdf
Human expression which:
describes mathematical functions
makes the computer turn switches on and off
This question is very broad. My favorite definition is that a programming language is a means of expressing computations:
Precisely
At a high level
In ways we can reason about them
By computation I mean what Turing and Church meant: the Turing machine and the lambda calculus have equivalent expressive power (which is a theorem), and the Church-Turing hypothesis (which is a conjecture) says roughly that there's no more powerful notion of computation out there. In other words, the kinds of computations that can be expressed in any programming language are at best the kinds that can be expressed using Turing machines or lambda-calculus programs—and some languages will be able to express only a subset of those calculations.
This definition of computation also encompasses your friendly neighborhood hardware, which is pretty easy to simulate using a Turing machine and even easier to simulate using the lambda calculus.
Expressing computations precisely means the computer can't wiggle out of its obligations: if we have a particular computation in mind, we can use a programming language to force the computer to perform that computation. (Languages with "implementation defined" or "undefined" constructs make this task more difficult. Programmers using these languages are often willing to settle for—or may be unknowingly settling for—some computation that is only closely related to the computation they had in mind.)
Expressing computation at a high level is what programming languages are all about. An important reason that there are so many different programming languages out there is that there are so many different high-level ways of thinking about problems. Often, if you have an important new class of problems to solve, you may be best off creating a new programming language. For example, Larry Wall's writing suggests that solving a class of problems called "systems administration" was a motivation for him to create Perl.
(Another reason there are so many different programming languages out there is that creating a new language is a lot of fun, and anyone can learn to do it.)
Finally, many programmers want languages that make it easy to reason about programs. For example, today a student of mine implemented a new algorithm that made his program run over six times faster. He had to reason very carefully about the contents of C arrays to make sure that the new algorithm would do the same job the old one did. Luckily C has decent tools for reasoning about programs, for example:
A change in a[i] cannot affect the value of a[i-1].
My student also applied a reasoning principle that isn't valid in C:
The sum of a sequence of unsigned integers will be at least as large as any integer in the sequence.
This isn't true in C because the sum might overflow. One reason some programmers prefer languages like Standard ML is that in SML, this reasoning principle is always valid. Of languages in wide use, Haskell probably has the strongest reasoning principles; Richard Bird has developed equational reasoning about programs to a high art.
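To make the overflow point concrete, here is an analogous sketch in Haskell rather than C (added for illustration): with fixed-width unsigned integers the "sum is at least as large as any element" principle fails because arithmetic wraps around, while with unbounded Integers it holds.

import Data.Word (Word8)

xs :: [Word8]
xs = [200, 100]

wrappedSum :: Word8
wrappedSum = sum xs                       -- 300 wraps to 44 in 8-bit arithmetic

principleHolds :: Bool
principleHolds = all (<= wrappedSum) xs   -- False: 44 is smaller than 200

exactSum :: Integer
exactSum = sum (map fromIntegral xs)      -- 300, at least as large as every element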
I will not attempt to address all the tangential details that follow your opening question. But I hope you will get something out of an answer that aims to give a deeper understanding, as you asked, of a fundamental question about programming languages.
One thing a lot of "IT" types forget is that there are 2 types of computer programming languages:
Software programming languages: C, Java, Perl, COBOL, etc.
Hardware programming languages: VHDL, Verilog, System Verilog, etc.
Interesting.
I'd say the defining feature of a programming language is the ability to make decisions based on input. Effectively, if and goto. Everything else is lots and lots of syntactic sugar. This is the idea that spawned Brainfuck, which is actually remarkably fun to (try to) use.
There are places where the line blurs; for example, I doubt people would consider XSLT to really be a programming language, but it's Turing-complete. I've even solved a Project Euler problem with it. (Very, very slowly.)
Three main properties of languages come to mind:
How is it run? Is it compiled to bare metal (C), compiled to mostly bare metal with some runtime lookup (C++), run on a JIT virtual machine (Java, .NET), bytecode-interpreted (Perl), or purely interpreted (uhh..)? This doesn't comment much on the language itself, but speaks to how portable the code may be, what sort of speed I might expect (and thus what broad classes of tasks would work well), and sometimes how flexible the language is.
What paradigms does it support? Procedural? Functional? Is the standard library built with classes or functions? Is there reflection? Is there, ideally, support for pretty much whatever I want to do?
How can I represent my data? Are there arrays, and are they fixed-size or not? How easy is it to use strings? Are there structs or hashes built in? What's the type system like? Are there objects? Are they class-based or prototype-based? Is everything an object, or are there primitives? Can I inherit from built-in objects?
I realize the last one is a very large collection of potential questions, but it's all related in my mind.
I imagine rebuilding the programming language landscape entirely from scratch would work pretty much how it did the first time: iteratively. Start with assembly, the list of direct commands the processor understands, and wrap it with something a bit easier to use. Repeat until you're happy.
Yes, you can write a Javascript interpreter in Javascript, or a Python interpreter in Python (see: PyPy), or a Python interpreter in Javascript. Such languages are called self-hosting. Have a look at Perl 6; this has been a goal for its main implementation from the start.
Ultimately, everything just has to translate to machine code, not necessarily C. You can write D or Fortran or Haskell or Lisp if you want. C just happens to be an old standard. And if you write a compiler for language Foo that can ultimately spit out machine code, by whatever means, then you can rewrite that compiler in Foo and skip the middleman. Of course, if your language is purely interpreted, this will probably result in a stack overflow...
As a friend taught me about computer languages, a language is a world: a world of communication with that machine. It is a world for implementing ideas, algorithms and functionality, as Alonzo Church and Alan Turing described. It is the technical equivalent of the mathematical structures that the aforementioned scientists built. It is a language with expressions and also limits. However, as Ludwig Wittgenstein said, "The limits of my language mean the limits of my world"; there are always limitations, and that's why one chooses the language that best fits one's needs.
This is a generic answer... some thoughts, really, rather than a complete answer.
There are many definitions to this but what I prefer is:
Computer programming is programming that helps to solve a particular technical task/problem.
There are 3 key phrases to look out for:
You: The computer will do what you (the programmer) tell it to do.
Instruct: Instructions are given to the computer in a language that it can understand. We will discuss that below.
Problem: At the end of the day, computers are (complex) tools. They are there to make our life simpler.
The answer can be lengthy, but you can find more about computer programming.
