What is the purpose of case sensitivity in languages? [duplicate] - programming-languages

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
Is there any advantage of being a case-sensitive programming language?
Why are many languages case sensitive?
Something I have always wondered, is why are languages designed to be case sensitive?
My pea brain can't fathom any possible reason why it is helpful.
But I'm sure there is one out there. And before anyone says it, having a variable called dog and Dog differentiated only by case is really, really bad practice, right?
Any comments appreciated, along with perhaps any history on the matter! I'm insensitive about case sensitivity generally, but sensitive about sensitivity around case sensitivity so let's keep all answers and comments civil!

It's not necessarily bad practice to have two members which are only differentiated by case, in languages which support it. For example, here's a fairly common bit of C#:
private readonly string name;
public string Name { get { return name; } }
Personally I'm quite happy with case sensitivity - particularly as it allows code like the above, where the member variable and property follow conventions anyway, avoiding confusion.
Note that case-sensitivity has a culture aspect too... not all cultures will deem the same characters to be equivalent...

One of the biggest reasons for case-sensitivity in programming languages is readability. Things that mean the same should also look the same.
I found the following interesting example by M. Sandin in a related discussion:
I used to believe case sensitivity was a mistake, until I did this in the case-insensitive language PL/SQL (syntax now entirely forgotten):
function IsValidUserLogin(user:string, password :string):bool begin
result = select * from USERS
where USER_NAME=user and PASSWORD=password;
return not is_empty(result);
end
This passed unnoticed for several months on a low-volume production system, and no harm came of it. But it is a nasty bug, sprung from case insensitivity, coding conventions, and the way humans read code. The lesson for me was: things that are the same should look the same.
Can you see the problem immediately? I couldn't... (The catch: inside the query, password resolves to the PASSWORD column rather than the parameter - the two names are identical once case is ignored - so PASSWORD=password always holds, and any existing user name passes the check regardless of the password supplied.)

I like case sensitivity in order to differentiate between class and instance.
Form form = new Form();
If you can't do that, you end up with variables called myForm or form1 or f, which are not as clean and descriptive as plain old form.
Case sensitivity also means that you don't have references to form, FORM and Form which all mean the same thing. I find it difficult to read such code. I find it much easier to scan code where all references to the same variable look exactly the same.

Something I have always wondered, is why are languages designed to be case sensitive?
Ultimately, it's because it is easier to implement a case-sensitive comparison correctly; you just compare bytes/characters without any conversions. You can also do other things, like hashing, really easily.
Why is this an issue? Well, case-insensitivity is rather hard to add unless you're in a tiny domain of supported characters (notably, US-ASCII). Case conversion rules vary by locale (the Turkish rules are not the same as those in the rest of the world) and there's no guarantee that flipping a single bit will do the right thing, or that it is always the same bit under the same preconditions. (IIRC, there are some really complex rules in some languages for throwing away diacritics when converting vowels to upper case, and reintroducing them when converting to lower case. I forget exactly what the details are.)
If you're case sensitive, you just ignore all that; it's just simpler. (Mind you, you still ought to pay attention to Unicode normalization forms, but that's another story, and it applies whatever case rules you're using.)
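To make the locale point concrete, here is a minimal sketch in C# (chosen only because the first answer above already uses it; the strings and culture are just illustrative) showing that "ignore case" gives different answers depending on whose rules you apply:
using System;
using System.Globalization;

class CaseFoldingSketch
{
    static void Main()
    {
        // Case-sensitive (ordinal) comparison: just compare code units, no conversion needed.
        Console.WriteLine(string.Equals("file", "FILE", StringComparison.Ordinal));            // False

        // Case-insensitive comparison needs a conversion rule, and the rule is cultural:
        // under Turkish rules, upper-casing "i" yields the dotted capital I (U+0130), not "I".
        CultureInfo turkish = CultureInfo.GetCultureInfo("tr-TR");
        Console.WriteLine("file".ToUpper(turkish));                                            // FİLE

        Console.WriteLine(string.Equals("file", "FILE", StringComparison.OrdinalIgnoreCase));  // True
        Console.WriteLine(string.Compare("file", "FILE", turkish,
                                         CompareOptions.IgnoreCase) == 0);                     // False under tr-TR
    }
}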

Imagine you have an object called dog, which has a method called Bark(). Also you have defined a class called Dog, which has a static method called Bark(). You write dog.Bark(). So what's it going to do? Call the object's method or the static method from the class? (in a language where :: doesn't exist)
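A small C# sketch of the collision being described (the Dog class and its members are invented purely for illustration): with case sensitivity, the class and the instance can share a name, and the reader always knows which receiver is meant.
class Dog
{
    public static Dog Create() { return new Dog(); }            // static: invoked on the class "Dog"
    public void Bark() { System.Console.WriteLine("Woof!"); }   // instance: invoked on the object "dog"
}

class Program
{
    static void Main()
    {
        Dog dog = Dog.Create();   // "Dog" is unambiguously the class...
        dog.Bark();               // ...and "dog" unambiguously the instance.
        // In a case-insensitive language, dog and Dog would be one identifier,
        // and dog.Bark() could plausibly mean either receiver.
    }
}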

I'm sure originally it was a performance consideration. Converting a string to upper or lower case for caseless comparison isn't an expensive operation exactly, but it's not free either, and on old systems it may have added complexity that the systems of the day weren't ready to handle.
And now, of course, languages like to be compatible with each other (VB, for example, can't distinguish between C# classes or functions that differ only in case), people are used to naming things with the same text but different case (see Jon Skeet's answer - I do that a lot), and the value of caseless languages wasn't really enough to outweigh these two.

The reason you can't understand why case-sensitivity is a good idea, is because it is not. It is just one of the weird quirks of C (like 0-based arrays) that now seem "normal" because so many languages copied what C did.
C uses case-sensitivity in identifiers, but from a language design perspective that was a weird choice. Most languages that were designed from scratch (with no consideration given to being "like C" in any way) were made case-insensitive. This includes Fortran, Cobol, Lisp, and almost the entire Algol family of languages (Pascal, Modula-2, Oberon, Ada, etc.)
Scripting languages are a mixed bag. Many were made case-sensitive because the Unix filesystem was case-sensitive and they had to interact sensibly with it. C kind of grew up organically in the Unix environment, and probably picked up the case-sensitive philosophy from there.

Case-sensitive comparison is (from a naive point of view that ignores canonical equivalence) trivial (simply compare code points), but case-insensitive comparison is not well defined in the general case and is extremely complex, and the rules are impossible to remember. Implementing it is possible, but it will inadvertently lead to unexpected and surprising behavior. BTW, some languages like Fortran and Basic have always been case-insensitive.

Related

APL readability

I have to code in APL. Since the code is going to be maintained for a long time, I am wondering if there are some papers/books which contain heuristics/tips/samples to help in designing clean and readable APL programs.
It is a different experience from coding in other programming languages. Take writing a function, for example: keeping it small will not help, since such a function can consist of a single line of code that is completely incomprehensible.
First, welcome to the wonderful world of APL.
Writing readable and maintainable APL code is not much different than writing readable and maintainable code in any language. Any good book on writing clean code is as applicable to APL as any other language, perhaps even more so. I recommend Clean Code by Robert C. Martin.
Consider the guideline in this book that all code in a function should be at the same level of abstraction. This applies to APL 100 times over. For example, if you have a function named DoThisBigTask it should have very few APL primitive symbols in it, and certainly no long complex one-liners. It should just be series of calls to other, lower level functions. If these higher-level functions are all well-named and well-defined, the general drift should be easily determined by someone who does not even know APL. The lowest level functions will be nothing but primitives and will be inscrutable to the non-APLer. Depending on how they are written they may even initially appear inscrutable to a seasoned APLer. However, these low level functions should be short, have no side effects, and can easily be re-written rather than modified if the maintaining programmer is unable to understand the original coding technique.
In general, keep your functions short, well-named, well-defined, and to the point. And keep the lines of code even shorter. It is much more important to have well-defined and well-documented functions than it is to have well-written or well-documented lines of code.
Since you asked for books and other references, I can suggest:
APL2 in Depth by Norman D. Thomson and Raymond P. Polivka. I worked with Ray Polivka for years and he was one of the best APL teachers I have ever known.
The classic A. P. L.: An Interactive Approach by Leonard Gilman and Allen J. Rose is good for the core language, but is rather outdated and doesn't contain much that is truly relevant on readability.
APL 2 at a Glance by James A. Brown and Sandra Pakin serves in some ways as an update to Gilman and Rose. It covers nested operations and other updates to APL, but does not have much specifically directed at readability. Still, if you follow the examples here you will be writing readable code.
APL is Easy by STSC and Jerry R. Turner is an intro directed specifically at the APL*Plus line. Again, not much specifically on readability, but the models are generally well-designed, readable code.
Mastering Dyalog APL: A Complete Introduction to Dyalog APL by Bernard Legrand is quite good if you are specifically working in Dyalog APL, not so much if you are working in one of the other versions such as APL*Plus (from APL2000).
It is my view that the reputation of APL as a "write-only language" is much overstated. One does need to get used to the primitives and the symbols used to represent them. But then one needs to get used to the syntax and the various library functions in many other language environments. I have seen convoluted code in C, C++, and Java as hard to follow as any APL. Of course, it isn't good C, C++, or Java, even if it is clever.
Some advice:
Writing 'one-liners' is a way to test one's mastery of the language, but is very poor practice for production code.
Comment to make the algorithm and especially the data structure being used clear. As with any code, comments should add something that cannot be easily read from the code itself, or call attention to complex or obscure code.
If possible avoid obscure code so there is no need to explain it. It is usually possible.
Make each function do one and only one job, with a clear interface.
Avoid global variables for the most part, and document any that are needed.
Document the interface, purpose, and effect of any function at the top. Make utilities black boxes without side effects if possible. If side effects are essential, document those as part of the interface.
Develop a standard header comment structure.
Dynamic code built on the fly can add flexibility to a solution, but is often much harder to debug if problems occur. Make such code bullet-proof to the extent you can, and build in optional logging to help when it turns out to have problems anyway.
You can use an OOP-like style if you wish. But there is no need to do so. If you do, it should IMO be used fairly pervasively through an application, except perhaps for low-level utilities. But OOP-style code can be at least as convoluted as non-OOP code, and APL doesn't have built-in inheritance or other OOP-supporting syntax.
(I'll use "A" here in place of the comment symbol and "'" in place of the symbol sign.)
Well, I developed in APL for a year, and I only ever used Aplusdev.org.
You don't even need more. The trick is to try to think OOP-like. You should have -- if I remember correctly -- structured fields used as class data, something like {'attribute1 'attribute2, {value,value2}}, so you can easily pick them out, like obj.attribute1 in C++.
(Here 'attribute Pick object, used only in class functions :) )
Moreover, use namespaced functions:
namespace_classname.method(this, arg1)
namespace_classname._private_method(this, arg1, arg2)
and lots of simple tool functions instead of nifty, long lines. The performance drop is not substantial; you can optimize later, say for arrays, once you see that something could be faster.
And before anything: think MATLAB and Mathematica without for loops! :) It helps a lot.
My suggestions for robust, maintainable code:
use an extensive set of utility functions instead of trickery with those unreadable symbols, to keep your code always to the point.
try-catch blocks: there is built-in exception handling, which can be utilized here,
try_begin();
A the code being tried goes here, maybe in extra brackets so you don't forget try_end() at the end.
try_end();
catch(sth, function_here);
can be nicely implemented. (You'll see, catching errors is very important)
crude type checking: implement a standard and use it for functions that are not called too many times... (you can put a function with flexible parameters right after a function definition)
Syntax:
function(point2i, ch):
{
typecheck({{'int, [1 2]}, 'char}); A do some assertions in typecheck...
A your function goes here
}
lambda functions can be very effective; you can use some reflection to achieve lambdas.
always declare returns with saying "return"!
Unit tests based on try-catch testing each and every function you write.
I also used a lot of 'apply' and 'map' from Mathematica, implementing my own versions; they are very, very effective here.
I wrote with MATLAB-style thinking, since here you can have a list of structured fields (= class data) in a variable. You will write lots of those if you want to keep things for-loop-less (and you want to, trust me). For that you need a standard naming convention, say, indicating collections with plurals:
namespace_class.method(objects, arg1, arg2)
To the end: I also wrote inputBox and messageBox like the ones in JavaScript or Visual Basic; they make it very easy to hack together simple tools or check states. The only catch with messageBox is that it can't put the function flow on hold, so you need:
AA documentation of f1
f1():
{
A do sth
msgbox.call("Hi there",{'Ok, {'f2}});
}
f2():
{
A continue doing stuff
}
You can write auto-docs in bash with a gawk/sed combination to put it into a webpage.
Also creating HTML formatted code helps in printing. ;)
I hope this was a good outline for a proper build-up. Before writing your own tools, try to dig up the available tools from the legacy codebase... functions are often implemented four times over under different names, thanks to the mess from back then.

Can anyone explain the design decisions behind Autolisp/visual lisp to me?

I wonder whether anyone can explain the design rationale behind the following features of AutoLisp / Visual Lisp? To me they seem to fly in the face of accepted software practice... am I missing something?
All variables are global by default (ie unless placed after a / in the function arguments)
Reading/writing data from autocad requires putting stuff into an association list with lots of magic numbers. 10 means x/y coordinates, 90 means length of the coordinate list, 63 means colour, etc. Ok you could store these in some constants but that would mean yet more globals, and the documentation encourages you to use the magic numbers directly.
Lisp is a functional-style language, which encourages programming by recursion over iteration, but tail recursion is AFAIK not optimised in Visual Lisp, leading to horrendous call stacks - unless, of course, you iterate. But the loop syntax is very restrictive; e.g. you can't break out of or return a value from a loop unless you put some kind of flag in the termination condition. Result: ugly code.
Generally you are forced to declare variables all over the place which flies in the face of functional programming - so why use a functional(-ish) language?
Lisp isn't a language, it's a group of sometimes surprisingly different languages. Scheme and Clojure are the functional members of the family. Common Lisp, and the more specialized breeds like Elisp aren't particularly functional and don't inherently encourage functional programming or recursion. CL in fact includes a very flexible object system, an extremely flexible iteration DSL, and doesn't guarantee optimized tail calls (Scheme dialects do, but not Lisps in general; that's the pitfall in thinking of "Lisp" as a single language).
Now that we have that cleared up, AutoLisp is an implementation from 1986 based on an early version of XLISP (the earliest of which was published in 1983).
The reason that it might fly in the face of currently accepted programming practice is that it predates currently accepted programming practice. Another thing to keep in mind is that the cheapest netbook available today is several hundred times more powerful than what a programmer could expect to have access to back in the mid 80s. Meaning that even if a given feature was accepted to be excellent, CPU or memory constraints may have prevented its implementation in a commercial language.
I've never programmed in Autolisp/Visual Lisp specifically, and the stuff you cite sounds bloody annoying, but it may have had some performance/memory advantage that justified it at the time.
If I remember correctly, AutoLisp is a fork of an early version of XLisp (some sources claim it was XLisp 1.0; see this C2 article).
XLisp 1.0 is a 1-cell lisp (functions and variables share the same name-space) with some rather odd quirks to it.
You can add dynamic scoping into the mix, btw, and if you don't know what that is, consider yourself lucky. But actually not all four of your points are that big a deal, IMO:
"Undeclared vars are created automatically as global." Is that not the same as in CL (via setq)? The other option is to fail, and that's not a very attractive one for a language which is supposed to be used for quick-n-dirty scripting.
The "magic numbers" are DXF codes, which - you're right - are a major inconvenience, as they tend to change with changing ACAD versions sometimes (thankfully, rarely). That's just how it is. Fixing it would require a major overhaul, introducing some "schemas" and what not, and why would "they" bother? AutoLISP was left in its state as of approximately 1992, and never bothered with since. Visual LISP itself is an entirely different and much more capable system, but it is all locked out for the regular user, and only made to serve one goal - to emulate the old AutoLISP as faithfully as possible (except where it added new VBA-related features in the latter half of the 1990s, and has been locked since then too).
(while (not done) ...) is not that ugly. There's no tail-optimization guarantee, yes, just as there isn't one in CL and Haskell (that last one really stumps me - there's no guaranteed way to encode a loop in Haskell in constant space without monads - how about that?).
"You're forced to declare vars all over the place" - here I do not follow you. You declare them where you are supposed to declare them - in the function's internal arguments list. What other places do you mean? I don't know of any.
In reality the biggest stumbling block of AutoLISP is its dynamic name resolution, IMO, but that's how it was in XLisp, only a few years after Scheme first came out. Then there is also its immutable data store, but that was done mainly for simplicity of implementation, and to prevent too much confusion (and hence questions) from the user base, I guess.

Is similarity to "natural language" a convincing selling point for a programming language? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Look, for example, at AppleScript (and there are plenty of others, some admittedly quite good), which advertise their use of the natural language metaphor. Code is apparently more readable because it can be/is intended to be constructed in English-like sentences, or so they say. I'm sure there are people who would like nothing better than to program using only English sentences. However, I have doubts about the viability of a language that takes that paradigm too far (excepting niche cases).
So, after a certain reasonable point, is natural-languaginess a benefit or a misfeature? What if the concept is carried to an extreme -- will code necessarily be more readable? Or might it be unnecessarily long, difficult to work with, and just as capable of producing hilarity on the scale of obfuscated Perl, obfuscated C, and eye-twisting Bash script logorrhea?
I am aware of some specialty cases like "Inform" that are almost pure English, but these have a niche that they're not likely to venture out from. I hear and read about how great it would be for code to read more like English sentences, but are there discussions of the possible disadvantages? If everyday language is so clear, simple, clean, lovely, concise, understandable, why did we invent mathematical notation in the first place?
Is it really easier to describe complex instructions accurately and precisely to a machine in natural language, or isn't something closer to mathematical markup a much better choice? Where should that line be drawn? And finally, are you attracted to languages that are touted as resembling English sentences? Should this whole question have just been a one liner:
naturalLanguage > computerishLanguage ? booAndHiss : cheerLoudly;
Well, of course, natural languages are rarely clear, simple, clean, lovely, concise or understandable, which is one of the reasons that most programming is done in languages far from natural.
My answer to this would be that the ideal programming language lies somewhere between a natural language and a very formal language.
On the one extreme, there's the formal, minimal, mathematical languages. Take for example Brainfuck:
,>++++++[<-------->-],[<+>-]<. // according to Wikipedia, this means addition
Or, what's somewhat preferable to the above mess, any type of lambda calculus.
λfxy.x
λfxy.y
This is one possible way of expressing the Boolean truth values in lambda calculus. Doesn't look very neat, especially when you build logical operators (such as AND being e.g. λpq.pqp) around them.
I claim that most people could not write production code in such a minimalistic, hard-to-grasp language.
The problem on the other end of the spectrum, namely natural languages as they are spoken by humans, is that languages with too much complexity and flexibility allow the programmer to express vague and indefinite things that can mean nothing to today's computers. Let's take this sample program:
MAYBE IT WILL RAIN CATS AND DOGS LATER ON. WOULD YOU LIKE THIS, DEAR COMPUTER?
IF SO, PRINT "HELLO" ON THE SCREEN.
IF YOU HATE RAIN MORE THAN GEORGE DOES, PRINT SOME VAGUE GARBAGE INSTEAD.
(IN THE LATTER CASE, IT IS UP TO YOU WHERE YOU OUTPUT THAT GARBAGE.)
Now this is an obvious case of vagueness. But sometimes you would get things wrong with more reasonable natural language programs, such as:
READ AN INTEGER NUMBER FROM THE TERMINAL.
READ ANOTHER INTEGER NUMBER FROM THE TERMINAL.
IF IT IS LARGER THAN ZERO, PRINT AN ERROR.
Which number is IT referring to? And what kind of error should be printed (you forgot to specify it)? You would have to be really careful to be extremely explicit about what you mean.
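By contrast, a conventional language forces you to name what you mean. A minimal C# sketch of the same task (the error message is invented, since the natural-language version never specified one):
using System;

class ReadTwoIntegers
{
    static void Main()
    {
        // Each value gets its own name, so there is no ambiguous "IT" left to resolve.
        int first = int.Parse(Console.ReadLine());
        int second = int.Parse(Console.ReadLine());

        if (second > 0)   // you must state explicitly which number you are testing
        {
            Console.Error.WriteLine("Error: the second number is larger than zero.");
        }
    }
}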
It's already too easy to misunderstand other humans. How do you expect a computer to do better?
Thus, a computer language's syntax and grammar has to be strict enough so that it doesn't allow ambiguity. A statement must evaluate in a deterministic way. (There are maybe corner cases; I'm talking about the general case here.)
I personally prefer languages with a very limited set of keywords. You can quickly learn such a language, and you don't have to choose between 10,000 ways of achieving one goal simply because there's 10,000 keywords for doing the same thing (as in: GO/WALK/RUN/TROD/SLEEPWALK/etc. TO THE FRIDGE AND GET ME A BEER!). It means if you need to think about 10,000 different ways of doing something, it won't be due to the language, but due to the fact that there are 9,999 stupid ways to do it, and 1 elegant solution that just shines more than all the others.
Note that I wrote all natural language examples in upper-case. That's because I sort of had good old GW-BASIC and COBOL in mind while I wrote this. There've been some examples of programming languages that lean on natural language, and I think history has shown that they are, in general, somewhat less widespread than e.g. terse C-style languages.
I recently read that according to Gartner there are over 400 billion lines of COBOL source code in active use worldwide today.
That doesn't prove anything other than that banks and governments are fond of their legacy code, but you could construe it as a testament to the success of English-like programming languages. I'm not aware of any other programming language that is so close to English and so verbose.
Aside from that, I tend to agree with the other respondents: Programmers prefer not to type so much, and in general a language based on mathematics-like shorthand is both more expressive and more precise than one based on English.
There's a point where terse, expressive code looks like line noise. Perl, APL and J come to mind as examples with "illegible one-liners." Programmers are humans, and it may be beneficial to leave them with some similarity to natural language to give their brains something familiar to hold on to. Thus, I advocate a happy medium that's reminiscent of, but not too close to, natural language.
"When a programming language is created that allows programmers to program in simple English, it will be discovered that programmers cannot speak English." ~ Unknown
In my (not so) humble opinion, no.
Natural language is full of ambiguities. Normally we do not think of them because humans can easily disambiguate them, based on many criteria often unavailable to the computer. First off, we have knowledge about the world (elephants don't fit in pajamas), but we also use more senses than just hearing when we speak to each other, body language to name one. The intonation and manner in which things are said also help a lot to disambiguate. It is harder to catch irony or sarcasm in written text, which is more or less a transcription of what we would say - more so in the case of IM, less in the case of well-written articles. In general there are loads and loads of ambiguities in natural language, for instance where the PPs, prepositional phrases, attach:
"Workers [dumped [sacks [with flour]]]"
"Workers [dumped [sacks] [with a fork-lift]]]"
Any human immediately tells where the PP will attach: it's reasonable to have sacks with flour in them, and it's reasonable to use a fork-lift to dump something. Another very troublesome area is the word "and", which messes up the grammar horrendously, as do all the references we use - the pronouns in general, but also more complex references, e.g. "Bill bought a Dodge Viper, sadly the car was a lemon".
So we have three options. Keep the ambiguities in and try to deal with them, accepting very many errors in disambiguation and very, very slow parsing - no LALR or LL will work here. Or try to make an artificial grammar resembling natural language while keeping it deterministic, which is more reasonable but still horrible: we now have a language that falsely resembles English, but isn't, which is confusing. We get none of the benefits of a proper syntax and none of the benefits of natural language, but an oversized, overly wordy monster with a difficult and unintuitive grammar, difficult to learn and slow to write.
The third way is realizing we need a succinct way of expressing ourselves which can also be processed by a computer - not resembling any natural language, but focusing on being an unambiguous description of an algorithm. This increases readability, especially if we compare it to a very precise natural-language counterpart. This is why many people prefer to also read the pseudo-code when dealing with difficult problems or advanced algorithms: it relieves us of the trouble of dealing with ambiguities, and it is more optimal for expressing computer instructions.
The issue isn't so much that it's easier to describe complex ideas using one approach or the other, but it certainly is easier to understand machine languages (at least for machines). The biggest issue is, as always, ambiguity. Computers are terrible at understanding it, so most grammars for programming languages need to be constructed to either remove all ambiguity, or the general language must be constructed so that ambiguity isn't actually a problem (this is tricky).
Any programming language that allows for ambiguity would be terribly error-prone; and any natural language that doesn't allow ambiguity would be terribly verbose and convoluted (I'm looking at you, Lojban [ok, maybe Lojban isn't so bad, still...]).
The propensity some people show for preferring natural languages as programming languages might essentially be rooted in the desire to eventually be able to feed a physics textbook into a parser, whereupon it'll do your homework when asked.
Of course, that's not to say that programming languages shouldn't have hints of natural language: Especially for OOP it makes good sense to have calling grammar resemble natural grammar, like in Obj-C, which is sort of a game of mad libs:
[pot makeCoffee:strong withSugar:NO];
Doing the same in BrainFuck would be, well, a brainfuck, three full pages of code to flip a switch will do that to you.
In essence: the best languages are (probably) the ones that resemble natural languages without pretending to be one. (Avoiding the uncanny valley of programming languages, [if there is such a thing] if you will. [Subclauses! Yay!])
A natural language is too ambiguous to be used as a programming language. It has to be artificially constrained to eliminate ambiguities.
But that defeats the purpose of having a "natural" programming language, because you get its verbosity and none of its advantages in expressiveness.
I think the fourth language I coded professionally in (after Fortran, Pascal and Cobol) was Natural. Which is a pretty obscure 4GL of 1980's vintage for developing mainframe systems against an ADABAS database.
Called Natural I believe because it had pretensions to be so. Supposedly management-readable like cobol, but minus the fluff.
Which should tell you that attempts at 'natural' programming languages have a commercial history of over 30 years now (more if you count COBOL), but they have pretty much lost out to languages that don't pretend to be 'natural' but do allow the programmer to define the problem succinctly. When I first started coding, the 1GL -> 2GL -> 3GL evolution wasn't that old, and the progression to 4GL (defined then as more English-like programming languages) for mainstream work seemed an obvious next step. It hasn't worked out that way. If anything, getting up to speed with coding has now got harder because there are more abstract concepts to learn.
SQL was designed with natural language in mind originally. Fortunately it hasn't held on too tightly to this and advances since its conception are less "naturalistic".
But anybody who has tried to write a complicated query in SQL will tell you that it's not that easy. You have to worry about the scope of some keywords over your query. You end up with this incredibly hard-to-understand query that does some crazy shit, but you re-write it every time you need to change something because that's easier.
Natural language programming is a bad idea. The further you get from assembly, the more mistakes you can make - not in terms of logical errors or anything like that, but in terms of having the wrong assumption about how the script interpreter/bytecode interpreter/compiler makes your code run on the CPU.
It seems to be a great feature for beginners, or people who program as a "secondary activity". But I doubt you could reach the complexity and versatility of actual programming languages with natural language.
If there was a programming language that actually adhered to all of the conventions of the natural language it mimics, then that would be fantastic.
In reality, however, a lot of so-called "natural" programming languages have far stricter syntax than English, which means that although they are easily readable, it is debatable whether they are actually all that easy to write.
What makes sense in English is often a syntax error in AppleScript.
Everyday language isn't so clear, simple, clean, lovely, concise and understandable - to a computer. However, to a human, readability counts for a lot, and the closer you get to a natural language, the easier it is to read. That's why we're not all using assembly language.
If you have a completely natural language, there are a lot of things that need to be handled - the sentence needs to be parsed, each word must be understood - and there is plenty of room for ambiguity. That's generally not a good thing for a programming language, because then we're venturing into psychic programming - the computer has to figure out what you were thinking, which is not at all easy to get.
However, if you can make something sufficiently close to natural language - and yes, Inform 7 is probably the best example - so sentences look natural, but still have some structure you need to follow - then the code is almost instantly readable, even to people that don't know the language. There's usually also less specialized syntax to remember - because you're really just talking (a slightly modified form of) English - but if you have to do something out of the ordinary, then you might have to jump through some hoops to do that.
In practice, most languages don't bother with this, because that makes it easier for them to allow you to be precise. However, some will still hover closer to the "natural language". This can be a good thing: if you have to translate some pseudocode algorithm to a language, you don't need to manipulate it as much to make it work, reducing the risk that you make an error in the translation.
As an example, let's compare C and Pascal. This Pascal code:
for i := 1 to 10 do begin
j := j + 1;
end;
is equivalent to this C code:
for (i = 1; i <= 10; i++) {
j = j + 1;
}
If you had no prior knowledge of either syntax, the Pascal version is generally going to be simpler to read, if only because it's not as complex as a C for.
Let's also consider operators. Pascal and C both share +, - and *. They also both have /, but with different semantics: In C, / does an integer division if both operands are integers; in Pascal, it always does a "real" division and uses div for integer division. That means that you have to take the types into account when figuring out what actually happens in that line of code.
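C# happens to follow the C convention here, so a tiny sketch (values picked arbitrarily) of why you have to check the operand types before you know what such a line does:
using System;

class DivisionSketch
{
    static void Main()
    {
        int a = 7, b = 2;
        double x = 7;

        Console.WriteLine(a / b);   // 3   - both operands are int, so "/" means integer division
        Console.WriteLine(x / b);   // 3.5 - one operand is double, so "/" means real division
        // Pascal sidesteps the double meaning: "/" is always real division, "div" is integer division.
    }
}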
C also has a bunch of other operators: &&, ||, &, |, ^, <<, >> - in Pascal, those operators are instead named and, or, and, or, xor, shl, shr. Instead of relying on some semi-arbitrary sequence of characters, it's spelled out more. It's instantly obvious that xor is - well, XOR - unlike the C version, where there's no obvious correlation between ^ and XOR.
Of course, this is to some degree a matter of opinion: I much prefer a Pascal-like syntax to a C-like syntax, because I think it's more readable, but that doesn't mean everyone else does: A more natural language is usually going to be more verbose, and some people simply dislike that extra level of verbosity.
Basically, it's a matter of choosing what makes the most sense for the problem domain: if the problem domain is very limited (like with Inform), then a natural language makes perfect sense. If it's a very generic domain (like with C), then you either need far more advanced processing than we are currently capable of, or a lot of verbosity to fill in the details - and in that case, you have to choose a balance depending on what sort of users will be using the languages (for regular people, you need more naturalness, for people who know programming, they're usually comfortable enough with less natural languages and will prefer something closer to that end).
I think the question is, who reads and who writes the application code in question? I think, regardless of the language or architecture, a trained software developer should be writing the code, and analyze the code as bugs arise.

Why are there not more control structures in most programming languages?

Why do most languages seem to only exhibit fairly basic control structures from a logic point of view? Stuff like If ... then, Else..., loops, For each, switch statement, etc. The standard list seems fairly basic from a logic point of view.
Why is there not much more in the way of logic syntactical sugar? Perhaps something like a proposition engine, where you could feed in an array of premises, or functions that return complicated, self-referential, interdependent functions and results. Something where you could chain together a complex array of conditions, but represented in a way that was easy and clear to read in the code.
Premise 1
Premise 2 if and only if Premise 1
Premise 3
Premise 4 if Premise 2 and Premise 3
Premise 5 if and only if Premise 4
etc...
Conclusion
I realize that this kind of logic can be constructed in functions and/or nested conditional statements. But why are there not generally more syntax options for structuring these kinds of logical propositions without ending up with hairy-looking conditional statements that can be hard to read and debug?
Is there an explanation for the kinds of control structures we typically see in mainstream programming languages? Are there specific control structures you would like to see directly supported by a language's syntax? Does this just add unnecessary complexity to the language?
Have you looked at Prolog? A Prolog program is basically a set of rules that is turned into one big evaluation engine.
From my personal experience Prolog is a bit too weird and I actually prefer ifs, whiles and so on but YMMV.
Boolean algebra is not difficult, and provides a solution for any conditionals you can think of, plus an infinite number of other variants.
You might as well ask for special syntax for "commonly-used" arithmetic expressions. Who is to say what qualifies as commonly-used? And where do you stop adding special-case syntax?
Adding to the complexity of a language parser is not preferable to using constructive expression syntax, combined with extensibility through defining functions.
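For instance, the premise chain from the question reduces to ordinary boolean expressions plus a one-line helper. A minimal C# sketch (the premise values are invented placeholders for whatever the domain supplies):
using System;

class PropositionSketch
{
    // Material implication: a -> b
    static bool Implies(bool a, bool b) { return !a || b; }

    static void Main()
    {
        // Hypothetical truth values; in real code these would be computed from the problem domain.
        bool premise1 = true, premise2 = true, premise3 = false, premise4 = false, premise5 = false;

        // The rules from the question, written as plain boolean expressions.
        bool rulesHold =
            (premise2 == premise1) &&                    // Premise 2 if and only if Premise 1
            Implies(premise2 && premise3, premise4) &&   // Premise 4 if Premise 2 and Premise 3
            (premise5 == premise4);                      // Premise 5 if and only if Premise 4

        Console.WriteLine(rulesHold ? "Premises are consistent" : "Premises conflict");
    }
}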
It's been a long time since my Logic class in college but I would guess it's a mixture of difficulty in writing them into the language vs. the frequency with which they'd be used. I can't say I've ever had the need for them (not that I can recall). For those times that you would require something of that ilk the language designers probably figure you can work out the logic yourself using just the basic structures.
Just my wild guess though.
Because most programming languages don't provide sufficient tools for users to implement them, it is not seen as an important enough feature for the implementer to provide as an extension, and it isn't demanded enough or used enough to be added to the standard.
If you really want it, use a language that provides it, or provides the tools to implement it (for instance, lisp macros).
It sounds as though you are describing a rules engine.
The basic control algorithms we use mirror what the processor can do efficiently. Basically this boils down to simple test-and-branch instructions.
It may seem limiting to you, but many people don't like the idea of writing a simple-looking line of code that requires hundreds or thousands (or millions) of processor cycles to complete. Among these people are systems software folks, who write things like Operating Systems and compilers. Naturally most compilers are going to reflect their own writer's concerns.
It relates to the concern regarding atomicity. If you can express A, B, C, D in terms of simpler structures Y, Z, why supply A, B, C, D at all instead of just supplying Y, Z?
The existing languages reflect 60 years of the tension between atomicity and usability. The modern approach is "small language, large libraries". (C#, Java, C++, etc).
Because computers are binary, all decisions must come down to a 1/0, yes/no, true/false, etc.
To be efficient, the language constructs must reflect this.
Eventually all your code goes down to a micro-code that is executed one instruction at a time. Until the micro-code and accompanying CPU can describe something more colorful, we are stuck with a very plain language.

What is the worst programming language you ever worked with? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 13 years ago.
Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
If you have an interesting story to share, please post an answer, but do not abuse this question for bashing a language.
We are programmers, and our primary tool is the programming language we use.
While there is a lot of discussion about the best one, I'd like to hear your stories about the worst programming languages you ever worked with, and I'd like to know exactly what annoyed you.
I'd like to collect these stories partly to avoid common pitfalls while designing a language (especially a DSL) and partly to avoid quirky languages in the future in general.
This question is not subjective. If a language supports only single character identifiers (see my own answer) this is bad in a non-debatable way.
EDIT
Some people have raised concerns that this question attracts trolls.
Wading through all your answers made one thing clear: the large majority of answers are appropriate, useful and well written.
UPDATE 2009-07-01 19:15 GMT
The language overview is now complete, covering 103 different languages from 102 answers.
I decided to be lax about what counts as a programming language and included anything reasonable. Thank you David for your comments on this.
Here are all programming languages covered so far
(alphabetical order, linked with answer, new entries in bold):
ABAP, all 20th century languages, all drag and drop languages, all proprietary languages, APF, APL (1), AS400, Authorware, Autohotkey, BancaStar, BASIC, Bourne Shell, Brainfuck, C++, Centura Team Developer, Cobol (1), Cold Fusion, Coldfusion, CRM114, Crystal Syntax, CSS, Dataflex 2.3, DB/c DX, dbase II, DCL, Delphi IDE, Doors DXL, DOS batch (1), Excel Macro language, FileMaker, FOCUS, Forth, FORTRAN, FORTRAN 77, HTML, Illustra web blade, Informix 4th Generation Language, Informix Universal Server web blade, INTERCAL, Java, JavaScript (1), JCL (1), karol, LabTalk, Labview, Lingo, LISP, Logo, LOLCODE, LotusScript, m4, Magic II, Makefiles, MapBasic, MaxScript, Meditech Magic, MEL, mIRC Script, MS Access, MUMPS, Oberon, object extensions to C, Objective-C, OPS5, Oz, Perl (1), PHP, PL/SQL, PowerDynamo, PROGRESS 4GL, prova, PS-FOCUS, Python, Regular Expressions, RPG, RPG II, Scheme, ScriptMaker, sendmail.conf, Smalltalk, SNOBOL, SpeedScript, Sybase PowerBuilder, Symbian C++, System RPL, TCL, TECO, The Visual Software Environment, Tiny praat, TransCAD, troff, uBasic, VB6 (1), VBScript (1), VDF4, Vimscript, Visual Basic (1), Visual C++, Visual Foxpro, VSE, Webspeed, XSLT
The answers covering 80386 assembler, VB6 and VBScript have been removed.
PHP (In no particular order)
Inconsistent function names and argument orders
Because there are a zillion functions, each one of which seems to use a different naming convention and argument order. "Let's see... is it foo_bar or foobar or fooBar... and is it needle, haystack or haystack, needle?" The PHP string functions are a perfect example of this. Half of them use str_foo and the other half use strfoo.
Non-standard date format characters
Take j for example
In UNIX (which, by the way, is what everyone else uses as a guide for date string formats) %j returns the day of the year with leading zeros.
In PHP's date function j returns the day of the month without leading zeros.
Still No Support for Apache 2.0 MPM
It's not recommended.
Why isn't this supported? "When you make the underlying framework more complex by not having completely separate execution threads, completely separate memory segments and a strong sandbox for each request to play in, feet of clay are introduced into PHP's system." Link So... it's not supported 'cause it makes things harder? 'Cause only the things that are easy are worth doing right? (To be fair, as Emil H pointed out, this is generally attributed to bad 3rd-party libs not being thread-safe, whereas the core of PHP is.)
No native Unicode support
Native Unicode support is slated for PHP6
I'm sure glad that we haven't lived in a global environment where we might have need to speak to people in other languages for the past, oh 18 years. Oh wait. (To be fair, the fact that everything doesn't use Unicode in this day and age really annoys me. My point is I shouldn't have to do any extra work to make Unicode happen. This isn't only a PHP problem.)
I have other beefs with the language. These are just some.
Jeff Atwood has an old post about why PHP sucks. He also says it doesn't matter. I don't agree but there we are.
XSLT.
XSLT is baffling, to begin with. The metaphor is completely different from anything else I know.
The thing was designed by a committee so deep in angle brackets that it comes off as a bizarre frankenstein.
The weird incantations required to specify the output format.
The built-in, invisible rules.
The odd bolt-on stuff, like scripts.
The dependency on XPath.
Tool support has been pretty slim until lately. Debugging XSLT in the early days was an exercise in navigating in complete darkness. The tools change that, but still, XSLT tops my list.
XSLT is weird enough that most people just ignore it. If you must use it, you need an XSLT Shaman to give you the magic incantations to make things go.
DOS Batch files. Not sure if this qualifies as programming language at all.
It's not that you can't solve your problems, but if you are used to bash...
Just my two cents.
Not sure if it's a true language, but I hate Makefiles.
Makefiles have meaningful differences between space and TAB, so even if two lines appear identical, they do not run the same.
Make also relies on a complex set of implicit rules for many languages, which are difficult to learn, but then are frequently overridden by the make file.
A Makefile system is typically spread over many, many files, across many directories.
With virtually no scoping or abstraction, a change to a makefile several directories away can prevent my source from building. Yet the error message is invariably a compilation error, not a meaningful error about make or the makefiles.
Any environment I've worked in that uses makefiles successfully has a full-time Make expert. And all this to shave a few minutes off compilation??
The worst language I've ever seen comes from the tool praat, which is a good audio analysis tool. It does a pretty good job until you use the scripting language. Sigh. Bad memories.
Tiny praat script tutorial for beginners
Function call
There are at least 3 different function-calling syntaxes:
The regular one
string = selected("Strings")
Nothing special here, you assign to the variable string the result of the selected function. Not really scary... yet.
The "I'm invoking some GUI command with parameters"
Create Strings as file list... liste 'path$'/'type$'
As you can see, the function name starts at "Create" and finishes with the "...". The command "Create Strings as file list" is the text displayed on a button or a menu (I'm too scared to check) in praat. This command takes 2 parameters: liste and an expression. I'm going to look deeper into the expression 'path$'/'type$'.
Hmm. Yep. No spaces. If spaces were introduced, they would be separate arguments. As you can imagine, parentheses don't work. At this point of the description I would like to point out the suffix of the variable names. I won't develop it in this paragraph, I'm just teasing.
The "Oh, but I want to get the result of the GUI command in my variable"
noliftt = Get number of strings
Yes, we can see a pattern here: a long and weird function name, so it must be a GUI call. But there's no '...', so no parameters. I don't want to see what the parser looks like.
The incredible type system (AKA Haskell and OCaml, praat is coming to you)
Simple natives types
windowname$ = left$(line$,length(line$)-4)
So, what's going on there?
It's now time to look at the conventions and types of expressions, so here we go:
left$ :: (String, Int) -> String
length :: (String) -> Int
windowname$ :: String
line$ :: String
As you can see, variable names and function names are suffixed with their type or return type. If the suffix is a '$', then it returns a string or is a string. If there is nothing, it's a number. I can see the point of prefixing a variable with its type to ease implementation, but suffixing it? No, sorry, I can't.
Array type
To show the array type, let me introduce a 'tiny' loop:
for i from 1 to 4
Select... time time
bandwidth'i'$ = Get bandwidth... i
forhertz'i'$ = Get formant... i
endfor
We've got i, which is a number, and... (no, it's not a function)
bandwidth'i'$
What it does is create string variables: bandwidth1$, bandwidth2$, bandwidth3$, bandwidth4$ and give them values. As you might expect, you can't create a two-dimensional array this way; you must do something like this:
band2D__'i'__'j'$
The special string invocation
outline$ = "'time'#F'i':'forhertznum'Hz,'bandnum'Hz, 'spec''newline$'"
outline$ >> 'outfile$'
Strings are handled weirdly (to say the least) in the language. The '' is used to splice the value of a variable into the global "" string. This is _weird_. It goes against all the conventions built into many languages, from bash to PHP by way of PowerShell. And look, it even has redirection. Don't be fooled, it doesn't work like in your beloved shell. No, you have to get the variable value with the ''.
Da Wonderderderfulful execution model
I'm going to put the final touch to this wonderderderfulful presentation by talking to you about the execution model. As in every procedural language, you get instructions executed from top to bottom; then there are the variables and the praat GUI. That is, you code everything against the praat GUI: you invoke commands written on menus/buttons.
The main window of praat contains a list of items, which can be:
files
list of files (created by a function with a wonderderfulful long long name)
Spectrogramm
Strings (don't ask)
So if you want to perform an operation on a given file, you must select the file in the list programmatically and then push the different buttons to take some actions. If you want to pass parameters to a GUI action, you have to follow the GUI layout of the form for your arguments; for example "To Spectrogram... 0.005 5000 0.002 20 Gaussian" is like that because it follows the layout of the corresponding dialog form.
Needless to say, my nightmares are filled with praat scripts dancing around me and shouting "DEBUG MEEEE!!".
More information at the praat site, under the well-named section "easy programmable scripting language"
Well since this question refuses to die and since the OP did prod me into answering...
I humbly proffer for your consideration Authorware (AW) as the worst language it is possible to create. (n.b. I'm going off recollection here, it's been ~6 years since I used AW, which of course means there's a number of awful things I can't even remember)
the horror, the horror http://img.brothersoft.com/screenshots/softimage/a/adobe_authorware-67096-1.jpeg
Let's start with the fact that it's a Macromedia product (-10 points), a proprietary language (-50 more) primarily intended for creating e-learning software - moreover, software that could be created by non-programmers and programmers alike, implemented as an iconic language AND a text language (-100).
Now if that last statement didn't scare you then you haven't had to fix WYSIWYG generated code before (hello Dreamweaver and Frontpage devs!), but the salient point is that AW had a library of about 12 or so elements which could be dragged into a flow. Like "Page" elements, Animations, IFELSE, and GOTO (-100). Of course removing objects from the flow created any number of broken connections and artifacts which the IDE had variable levels of success coping with. Naturally the built in wizards (-10) were a major source of these.
Fortunately you could always step into a code view, and eventually you'd have to because with a limited set of iconic elements some things just weren't possible otherwise. The language itself was based on TUTOR (-50) - a candidate for worst language itself if only it had the ambition and scope to reach the depths AW would strive for - about which wikipedia says:
...the TUTOR language was not easy to learn. In fact, it was even suggested that several years of experience with the language would be required before programmers could build programs worth keeping.
An excellent foundation then, which was built upon in the years before the rise of the internet with exactly nothing. Absolutely no form of data structure beyond an array (-100), certainly no sugar (real men don't use switch statements?) (-10), and a large splash of syntactic vinegar ("--" was the comment indicator, so no decrement operator for you!) (-10). Language reference documentation was provided in paper or zip file formats (-100), but at least you had the support of the developer-run user group and could quickly establish that the solution to your problem was to use the DLL or SWF importing features of AW so you could do the actual coding in a real language.
AW was driven by a flow (with necessary PAUSE commands) and therefore has all the attendant problems of a linear rather than event based system (-50), and despite the outright marketing lies of the documentation it was not object oriented (-50) either. All code reuse was achieved through GOTO. No scope, lots of globals (-50).
It's not the language's fault directly, but obviously no source control integration was possible, and certainly no TDD, documentation generation or any other add-on tool you might like.
Of course Macromedia met the challenge of the internet head on with a stubborn refusal to engage for years, eventually producing the buggy, hard-to-use security nightmare which is Shockwave (-100) to essentially serialise desktop versions of the software through a required plugin (-10). As HTML rose, so did AW stagnate, still persisting with its Shockwave delivery even in the face of IEEE SCORM JavaScript standards.
Ultimately after years of begging and promises Macromedia announced a radical new version of AW in development to address these issues, and a few years later offshored the development and then cancelled the project. Although of course Macromedia are still selling it (EVIL BONUS -500).
If anything else needs to be said, this is the language which allows spaces in variable names (-10000).
If you ever want to experience true pain, try reading somebody else's uncommented Hungarian notation in a language which isn't case sensitive and allows spaces in variable names.
Total Annakata Arbitrary Score (AAS): -11300
Adjusted for personal experience: OutOfRangeException
(apologies for length, but it was cathartic)
Seriously: Perl.
It's just a pain in the ass to code with, for beginners and even for semi-professionals who work with Perl on a daily basis. I constantly see my colleagues struggle with the language, building the worst scripts - like 2000 lines with no regard for any well-accepted coding standard. It's the worst mess I've ever seen in programming.
Now, you can always say that those people are bad at coding (despite the fact that some of them have used Perl for many years now), but the language just encourages all the freaking shit that makes me scream when I have to read a script written by some other guy.
MS Access Visual Basic for Applications (VBA) was also pretty bad. Access was bad altogether in that it forced you down a weak paradigm and was deceptively simple to get started, but a nightmare to finish.
No answer about Cobol yet? :O
Old-skool BASICs with line numbers would be my choice. When you had no space between line numbers to add new lines, you had to run a renumber utility, which caused you to lose any mental anchors you had to what was where.
As a result, you ended up squeezing in too many statements on a single line (separated by colons), or you did a goto or gosub somewhere else to do the work you couldn't cram in.
MUMPS
I worked in it for a couple years, but have done a complete brain dump since then. All I can really remember was no documentation (at my location) and cryptic commands.
It was horrible. Horrible! HORRIBLE!!!
There are just two kinds of languages: the ones everybody complains about and the ones nobody uses.
Bjarne Stroustrup
I haven't yet worked with many languages and deal mostly with scripting languages; out of these VBScript is the one I like least. Although it has some handy features, some things really piss me off:
Object assignments are made using the Set keyword:
Set foo = Nothing
Omitting Set is one of the most common causes of run-time errors.
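To make the failure mode concrete, here is a minimal sketch (Scripting.Dictionary is just a convenient stand-in object):
' Correct: Set stores the object reference.
Set d = CreateObject("Scripting.Dictionary")
' Without Set, VBScript instead tries to evaluate the object's default
' property and assign its value, which typically fails at run time
' (the exact error message depends on the object).
d = CreateObject("Scripting.Dictionary")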
No such thing as structured exception handling. Error checking is like this:
On Error Resume Next
' Do something
If Err.Number <> 0 Then
' Handle error
Err.Clear
End If
' And so on
Enclosing the procedure call parameters in parentheses requires using the Call keyword:
Call Foo (a, b)
Its English-like syntax is way too verbose. (I'm a fan of curly braces.)
Logical operators are "long-circuit" (there is no short-circuit evaluation). If you need to test a compound condition where the second part is only safe to evaluate when the first succeeds, you have to split the conditions into separate If statements.
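For instance, something along these lines (a made-up sketch; obj stands for any object reference) evaluates obj.Count even when obj Is Nothing, so it blows up instead of quietly skipping the block:
' Looks safe, but And does not short-circuit: obj.Count is still
' evaluated when obj Is Nothing, and the script dies with an
' "Object required" error instead of skipping the block.
If Not obj Is Nothing And obj.Count > 0 Then
    ' ...
End If
' The workaround is to nest the tests:
If Not obj Is Nothing Then
    If obj.Count > 0 Then
        ' ...
    End If
End If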
Lack of parameterized class constructors.
To wrap a statement into several lines, you have to use an underscore:
str = "Hello, " & _
"world!"
Lack of multiline comments.
Edit: found this article: The Flangy Guide to Hating VBScript. The author sums up his complaints as "VBS isn't Python" :)
Objective-C.
The annotations are confusing, using brackets to call methods still does not compute in my brain, and what is worse is that all of the library functions from C are called using the standard operators in C, -> and ., and it seems like the only company that is driving this language is Apple.
I admit I have only used the language when programming for the iPhone (and looking into programming for OS X), but it feels as if C++ were merely forked, adding in annotations and forcing the implementation and the header files to be separate would make much more sense.
PROGRESS 4GL (apparently now known as "OpenEdge Advanced Business Language").
PROGRESS is both a language and a database system. The whole language is designed to make it easy to write crappy green-screen data-entry screens. (So start by imagining how well this translates to Windows.) Anything fancier than that, whether pretty screens, program logic, or batch processing... not so much.
I last used version 7, back in the late '90s, so it's vaguely possible that some of this is out-of-date, but I wouldn't bet on it.
It was originally designed for text-mode data-entry screens, so on Windows, all screen coordinates are in "character" units, which are some weird number of pixels wide and a different number of pixels high. But of course they default to a proportional font, so the number of "character units" doesn't correspond to the actual number of characters that will fit in a given space.
No classes or objects.
No language support for arrays or dynamic memory allocation. If you want something resembling an array, you create a temporary in-memory database table, define its schema, and then get a cursor on it. (I saw a bit of code from a later version, where they actually built and shipped a primitive object-oriented system on top of these in-memory tables. Scary.)
ISAM database access is built in. (But not SQL. Who needs it?) If you want to increment the Counter field in the current record in the State table, you just say State.Counter = State.Counter + 1. Which isn't so bad, except...
When you use a table directly in code, then behind the scenes, they create something resembling an invisible, magic local variable to hold the current cursor position in that table. They guess at which containing block this cursor will be scoped to. If you're not careful, your cursor will vanish when you exit a block, and reset itself later, with no warning. Or you'll start working with a table and find that you're not starting at the first record, because you're reusing the cursor from some other block (or even your own, because your scope was expanded when you didn't expect it).
Transactions operate on these wild-guess scopes. Are we having fun yet?
Everything can be abbreviated. For some of the offensively long keywords, this might not seem so bad at first. But if you have a variable named Index, you can refer to it as Index or as Ind or even as I. (Typos can have very interesting results.) And if you want to access a database field, not only can you abbreviate the field name, but you don't even have to qualify it with the table name; they'll guess the table too. For truly frightening results, combine this with:
Unless otherwise specified, they assume everything is a database access. If you access a variable you haven't declared yet (or, more likely, if you mistype the variable name), there's no compiler error: instead, it goes looking for a database field with that name... or a field that abbreviates to that name.
The guessing is the worst. Between the abbreviations and the field-by-default, you could get some nasty stuff if you weren't careful. (Forgot to declare I as a local variable before using it as a loop variable? No problem, we'll just randomly pick a table, grab its current record, and completely trash an arbitrarily-chosen field whose name starts with I!)
Then add in the fact that an accidental field-by-default access could change the scope it guessed for your tables, thus breaking some completely unrelated piece of code. Fun, yes?
They also have a reporting system built into the language, but I have apparently repressed all memories of it.
When I got another job working with Netscape LiveWire (an ill-fated attempt at server-side JavaScript) and classic ASP (VBScript), I was in heaven.
The worst language? BancStar, hands down.
3,000 predefined variables, all numbered, all global. No variable declaration, no initialization. Half of them, scattered over the range, are reserved for system use, but you can use them at your peril. A hundred or so are automatically filled in as a result of various operations, and there is no list of which ones those are. They all fit in 38K bytes, and there is no protection whatsoever against buffer overflow. The system will cheerfully let users put 20 bytes in a ten-byte field if you declared the length of an input field incorrectly. The effects are unpredictable, to say the least.
This is a language that will let you declare a calculated gosub or goto; due to its limitations, this is frequently necessary. Conditionals can be declared forward or reverse. Picture an "If" statement that terminates 20 lines before it begins.
The return stack is very shallow, (20 Gosubs or so) and since a user's press of any function key kicks off a different subroutine, you can overrun the stack easily. The designers thoughtfully included a "Clear Gosubs" command to nuke the stack completely in order to fix that problem and to make sure you would never know exactly what the program would do next.
There is much more. Tens of thousands of lines of this Lovecraftian horror.
The .bat file scripting language on DOS/Windows. God only knows how underpowered this one is, especially if you compare it to the Unix shell languages (which aren't that powerful either, but way better nonetheless).
Just try to concatenate two strings or make a for loop. Nah.
VSE, The Visual Software Environment.
This is a language that a prof of mine (Dr. Henry Ledgard) tried to sell us on back in undergrad/grad school. (I don't feel bad about giving his name because, as far as I can tell, he's still a big proponent and would welcome the chance to convince some folks it's the best thing since sliced bread). When describing it to people, my best analogy is that it's sort of a bastard child of FORTRAN and COBOL, with some extra bad thrown in. From the only really accessible folder I've found with this material (there's lots more in there that I'm not going to link specifically here):
VSE Overview (pdf)
Chapter 3: The VSE Language (pdf) (Not really an overview of the language at all)
Appendix: On Strings and Characters (pdf)
The Software Survivors (pdf) (Fevered ramblings attempting to justify this turd)
VSE is built around what they call "The Separation Principle". The idea is that Data and Behavior must be completely segregated. Imagine C's requirement that all variables/data must be declared at the beginning of the function, except now move that declaration into a separate file that other functions can use as well. When other functions use it, they're using the same data, not a local copy of data with the same layout.
Why do things this way? We learn from The Software Survivors that Variable Scope Rules Are Hard. I'd include a quote but, like most fools, it takes these guys forever to say anything. Search that PDF for "Quagmire Of Scope" and you'll discover some true enlightenment.
They go on to claim that this somehow makes it more suitable for multi-proc environments because it more closely models the underlying hardware implementation. Riiiight.
Another choice theme that comes up frequently:
INCREMENT DAY COUNT BY 7 (or DAY COUNT = DAY COUNT + 7)
DECREMENT TOTAL LOSS BY GROUND_LOSS
ADD 100.3 TO TOTAL LOSS(LINK_POINTER)
SET AIRCRAFT STATE TO ON_THE_GROUND
PERCENT BUSY = (TOTAL BUSY CALLS * 100)/TOTAL CALLS
Although not earthshaking, the style of arithmetic reflects ordinary usage, i.e., anyone can read and understand it - without knowing a programming language. In fact, VisiSoft arithmetic is virtually identical to FORTRAN, including embedded complex arithmetic. This puts programmers concerned with their professional status and corresponding job security ill at ease.
Ummm, not that concerned at all, really. One of the key selling points that Bill Cave uses to try to sell VSE is the democratization of programming so that business people don't need to indenture themselves to programmers who use crazy, arcane tools for the sole purpose of job security. He leverages this irrational fear to sell his tool. (And it works-- the federal gov't is his biggest customer). I counted 17 uses of the phrase "job security" in the document. Examples:
... and fit only for those desiring artificial job security.
More false job security?
Is job security dependent upon ensuring the other guy can't figure out what was done?
Is job security dependent upon complex code...?
One of the strongest forces affecting the acceptance of new technology is the perception of one's job security.
He uses this paranoia to drive a wedge between the managers holding the purse strings and the technical people who have the knowledge to recognize VSE for the turd that it is. This is how he squeezes it into companies-- "Your technical people are only saying it sucks because they're afraid it will make them obsolete!"
A few additional choice quotes from the overview documentation:
Another consequence of this approach is that data is mapped into memory on a "What You See Is What You Get" basis, and maintained throughout. This allows users to move a complete structure as a string of characters into a template that describes each individual field. Multiple templates can be redefined for a given storage area. Unlike C and other languages, substructures can be moved without the problems of misalignment due to word boundary alignment standards.
Now, I don't know about you, but I know that a WYSIWYG approach to memory layout is at the top of my priority list when it comes to language choice! Basically, they ignore alignment issues because only old languages that were designed in the '60s and '70s care about word alignment. Or something like that. The reasoning is bogus. It made so little sense to me that I proceeded to forget it almost immediately.
There are no user-defined types in VSE. This is a far-reaching decision that greatly simplifies the language. The gain from a practical point of view is also great. VSE allows the designer and programmer to organize a program along the same lines as a physical system being modeled. VSE allows structures to be built in an easy-to-read, logical attribute hierarchy.
Awesome! User-defined types are lame. Why would I want something like an InputMessage object when I can have:
LINKS_IN_USE            INTEGER
INPUT_MESSAGE
  1 ORIGIN              INTEGER
  1 DESTINATION         INTEGER
  1 MESSAGE
    2 MESSAGE_HEADER    CHAR 10
    2 MESSAGE_BODY      CHAR 24
    2 MESSAGE_TRAILER   CHAR 10
  1 ARRIVAL_TIME        INTEGER
  1 DURATION            INTEGER
  1 TYPE                CHAR 5
OUTPUT_MESSAGE          CHARACTER 50
You might look at that and think, "Oh, that's pretty nicely formatted, if a bit old-school." Old-school is right. Whitespace is significant-- very significant. And redundant! The 1's must be in column 3. The 1 indicates that it's at the first level of the hierarchy. The Symbol name must be in column 5. Your hierarchies are limited to a depth of 9.
Well, ok, but is that so awful? Just wait:
It is well known that for reading text, use of conventional upper/lower case is more readable. VSE uses all upper case (except for comments). Why? The literature in psychology is based on prose. Programs, simply, are not prose. Programs are more like math, accounting, tables. Program fonts (usually Courier) are almost universally fixed-pitch, and for good reason – vertical alignment among related lines of code. Programs in upper case are nicely readable, and, after a time, much better in our opinion.
Nothing like enforcing your opinion at the language level! That's right, you cannot use any lower case in VSE unless it's in a comment. Just keep your CAPSLOCK on, it's gonna be stuck there for a while.
VSE subprocedures are called processes. This code sample contains three processes:
PROCESS_MUSIC
   EXECUTE INITIALIZE_THE_SCENE
   EXECUTE PROCESS_PANEL_WIDGET

INITIALIZE_THE_SCENE
   SET TEST_BUTTON PANEL_BUTTON_STATUS TO ON
   MOVE ' ' TO TEST_INPUT PANEL_INPUT_TEXT
   DISPLAY PANEL PANEL_MUSIC

PROCESS_PANEL_WIDGET
   ACCEPT PANEL PANEL_MUSIC
   *** CHECK FOR BUTTON CLICK
   IF RTG_PANEL_WIDGET_NAME IS EQUAL TO 'TEST_BUTTON'
      MOVE 'I LIKE THE BEATLES!' TO TEST_INPUT PANEL_INPUT_TEXT.
   DISPLAY PANEL PANEL_MUSIC
All caps as expected. After all, that's easier to read. Note the whitespace. It's significant again. All process names must start in column 0. The initial level of instructions must start on column 4. Deeper levels must be indented exactly 3 spaces. This isn't a big deal, though, because you aren't allowed to do things like nest conditionals. You want a nested conditional? Well just make another process and call it. And note the delicious COBOL-esque syntax!
You want loops? Easy:
EXECUTE NEXT_CALL
EXECUTE NEXT_CALL 5 TIMES
EXECUTE NEXT_CALL TOTAL CALL TIMES
EXECUTE NEXT_CALL UNTIL NO LINES ARE AVAILABLE
EXECUTE NEXT_CALL UNTIL CALLS_ANSWERED ARE EQUAL TO CALLS_WAITING
EXECUTE READ_MESSAGE UNTIL LEAD_CHARACTER IS A DELIMITER
Ugh.
Here is my contribution to my own question:
Origin LabTalk
My all-time favourite in this regard is Origin LabTalk.
In LabTalk the maximum length of a string variable identifier is one character.
That is, there are only 26 string variables at all. Even worse, some of them are used by Origin itself, and it is not clear which ones.
From the manual:
LabTalk uses the % notation to define a string variable. A legal string variable name must be a % character followed by a single alphabetic character (a letter from A to Z). String variable names are case-insensitive. Of all the 26 string variables that exist, Origin itself uses 14.
Doors DXL
The second worst in my opinion is Doors DXL.
Programming languages can be divided into two groups: those with manual memory management (e.g. delete, free) and those with a garbage collector. Some languages offer both, but DXL is probably the only language in the world that supports neither. OK, to be honest this is only true for strings, but hey, strings aren't exactly the most rarely used data type in requirements engineering software.
The consequence is that memory used by a string can never be reclaimed, and DOORS DXL leaks like a sieve.
There are countless other quirks in DXL, just to name a few:
DXL function syntax
DXL arrays
Cold Fusion
I guess it's good for designers, but as a programmer I always felt like I had one hand tied behind my back.
The worst two languages I've worked with were APL, which is relatively well known for languages of its age, and TECO, the language in which the original Emacs was written. Both are notable for their terse, inscrutable syntax.
APL is an array processing language; it's extremely powerful, but nearly impossible to read, since every character is an operator, and many don't appear on standard keyboards.
TECO had a similar look, and for a similar reason. Most characters are operators, and this special-purpose language was devoted to editing text files. It was a little better, since it used the standard character set. And it did have the ability to define functions, which was what gave life to Emacs--people wrote macros, and after a while only invoked those. But figuring out what a program did or writing a new one was quite a challenge.
LOLCODE:
HAI
CAN HAS STDIO?
VISIBLE "HAI WORLD!"
KTHXBYE
Seriously, the worst programming language ever is that of Makefiles. Totally incomprehensible, tabs have syntactic meaning, and there isn't even a debugger to find out what's going on.
I'm not sure if you meant to include scripting languages, but I've seen TCL (which is also annoying), and the mIRC scripting language annoys me to no end.
Because of some oversight in the parsing, it's whitespace significant when it's not supposed to be. Conditional statements will sometimes be executed when they're supposed to be skipped because of this. Opening a block statement cannot be done on a separate line, etc.
Other than that it's just full of messy, inconsistent syntax that was probably designed that way to make very basic stuff easy, but at the same time makes anything a little more complex barely readable.
I lost most of my mIRC scripts, or I could have probably found some good examples of what a horrible mess it forces you to create :(
Regular expressions
It's a write-only language, and it's hard to verify whether it works correctly for the right inputs.
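A small illustration (written here with VBScript's RegExp object; the pattern is only a rough, illustrative email check, not a definitive one):
Set re = New RegExp
re.IgnoreCase = True
re.Pattern = "^[\w.+-]+@[\w-]+(\.[\w-]+)+$"
WScript.Echo re.Test("someone@example.com")   ' prints True
' Reading "^[\w.+-]+@[\w-]+(\.[\w-]+)+$" back and saying precisely what it
' accepts and rejects is the hard part - hence "write-only".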
Visual Foxpro
I can't believe nobody has said this one:
LotusScript
I think it's far worse than PHP, at least.
It's not about the language itself, which follows a syntax similar to Visual Basic; it's the fact that it seems to have a lot of functions for extremely useless things that you will never use (or once in a million times), but lacks support for things you will use every day.
I don't remember any concrete examples, but they were like:
"Ok, I have an event to check whether the mouse pointer is in the upper corner of the form and I don't have an double click event for the Form!!?? WTF??"
Twice I've had to work in 'languages' where you drag-n-dropped modules onto the page and linked them together with lines to show data flow. (One claimed to be an RDBMS, and the other a general-purpose data acquisition and number-crunching language.)
Just thinking of it makes me want to throttle someone. Or puke. Or both.
Worse, neither exposed a text language that you could hack directly.
I find myself avoiding VBScript/Visual Basic 6 the most.
I use primarily C++, JavaScript, and Java for most tasks, and dabble in Ruby, Scala, Erlang, Python, assembler, and Perl when the need arises.
I, like most other reasonably minded polyglots/programmers, strongly feel that you have to use the right tool for the job - this requires you to understand your domain and to understand your tools.
My issue with VBScript and VB6 is that when I use them to script Windows or Office applications (the right domain for them), I find myself struggling with the language (they fall short of being the right tool).
VBScript's lack of easy-to-use native data structures (such as associative containers/maps) and other quirks (such as Set for assignment to objects) are needless and frustrating annoyances, especially for a scripting language. Contrast it with JScript (which I now use to script Windows via wscript/cscript and to do ActiveX automation scripts), which is far more expressive. While there are certain things that work better with VBScript (passing arrays back and forth from COM objects is slightly easier, although it is easier to pass event handlers into COM components with JScript), I am still surprised by the number of coders who still use VBScript to script Windows. I bet that if they wrote the same program in both languages, they would find that JScript works with you much more than VBScript, because of JScript's native hash data types and closures.
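The usual workaround, as far as I can tell, is to go through COM for anything map-like, roughly along these lines (a sketch, not a recommendation):
Set ages = CreateObject("Scripting.Dictionary")
ages.Add "alice", 30
ages.Add "bob", 25
If ages.Exists("alice") Then
    WScript.Echo ages("alice")   ' prints 30
End If
It works, but it's a COM object you have to create and manage, not a native data structure with literal syntax the way JScript's objects are.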
VB6/VBA, though a little better than VBScript in general, still have many similar issues; for their domain, they require much more boilerplate to do simple tasks than I would like, and more than I have seen in other scripting languages.
In 25+ years of computer programming, by far the worst thing I've ever experienced was a derivative of MUMPS called Meditech Magic. It's much more evil than PHP could ever hope to be.
It doesn't even use '=' for assignment! 100^b assigns a value of 100 to b and is read as "100 goes to b". Basically, this language invented its own syntax from top to bottom. So no matter how many programming languages you know, Magic will be a complete mystery to you.
Here is 100 bottles of beer on the wall written in this abomination of a language:
BEERv1.1,
100^b,T("")^#,DO{b'<1 NN(b,"bottle"_IF{b=1 " ";"s "}_"of beer on the wall")^#,
N(b,"bottle"_IF{b=1 " ";"s "}_"of beer!")^#,
N("You take one down, pass it around,")^#,b-1^b,
N(b,"bottle"_IF{b=1 " ";"s "}_"of beer on the wall!")^#},
END;
TCL. It only compiles code right before it executes it, so it's possible that if your code never went down branch A while testing, then one day in the field when it does go down branch A, it could hit a SYNTAX ERROR!
