Flexibility or Consistency: which is correct for a good programming language?

Recently I've been pondering the characteristics of a good programming language. It's true that "there are a thousand Hamlets in a thousand people's eyes"; people value things like simplicity, explicitness, readability and more.
But I'm wondering whether a good programming language should be consistent, meaning that everyone writes the same (or at least very similar) code to achieve the same functionality, so that everyone can read, understand and maintain others' code easily and effectively. Not so flexible that everyone writes code of different readability and quality for the same functionality, which depends mostly on the programmer's experience and skill.
So far I haven't found a language that matches my definition of consistency, but I'm still looking. Does anyone know of one?
I'm not sure whether you agree with me or not. Please point out anything wrong with my view, and leave your arguments whether you agree or not.
Regards.

Both, or either. Take the difference between Perl and Python. In Perl, tell someone to read data from a file and display it to stdout and you can get roughly 85,000 different permutations. In Python, you're likely to have 5-10. However, both are widely used, extremely powerful, highly efficient and capable of nearly any task.
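For illustration, nearly every Python version of that task takes the same shape; a sketch (the filename is a placeholder):
# Read a file and write its contents to stdout; the idiomatic Python shape.
with open("data.txt") as f:
    for line in f:
        print(line, end="")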
Language choice comes down to personal preference and a bit of what you want to do. OO paradigms will lead to drastically different program structures from program to program; imperative paradigms will lead to very similar structures from programmer to programmer; functional paradigms will be somewhere in between.
As an aside, I personally prefer a language flexible enough to let me design the application in the manner I see fit, as there are times when the performance of certain components is an absolute necessity and others where it doesn't matter at all. A strict "One True Way" can place limits on what you do, while "There's More Than One Way To Do It" requires you to know which way performs best and to make the tradeoff between readability, compactness and performance (not always mutually exclusive).

There's this great cartoon with a tree swing in every panel. One panel has a regular swing and says "what the customer needed". The next panel shows two seats, one on top of the other, and says "what the customer asked for". Next is a swing with three stacked seats that says "what the programmer built". Everyone has a mental picture of their expectations, and everyone's picture is different.
You can't achieve consistency in that kind of environment. If we can't achieve consistency, can we improve upon readability? Because the end goal is having code that other programmers can maintain.
I like literate programming. It arranges code in a human context - what makes sense for a person to read. A literate style more closely follows your mental model.
Even when I can't use the tools, that style has vastly improved the clarity and effectiveness of comments.

The Zen of Python contains:
There should be one-- and preferably only one --obvious way to do it.
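That line, along with the rest of the list, ships with the Python interpreter itself; running the following prints the whole Zen of Python:
import this  # prints the Zen of Python, including the line quoted above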


Is similarity to "natural language" a convincing selling point for a programming language?

Look, for example, at AppleScript (and there are plenty of others, some admittedly quite good), which advertises its use of the natural language metaphor. Code is apparently more readable because it can be, and is intended to be, constructed in English-like sentences, say they. I'm sure there are people who would like nothing better than to program using only English sentences. However, I have doubts about the viability of a language that takes that paradigm too far (excepting niche cases).
So, after a certain reasonable point, is natural-languaginess a benefit or a misfeature? What if the concept is carried to an extreme: will code necessarily be more readable? Or might it be unnecessarily long, difficult to work with, and just as capable of producing hilarity on the scale of obfuscated Perl, obfuscated C, and eye-twisting Bash script logorrhea?
I am aware of some specialty cases like "Inform" that are almost pure English, but these have a niche that they're not likely to venture out from. I hear and read about how great it would be for code to read more like English sentences, but are there discussions of the possible disadvantages? If everyday language is so clear, simple, clean, lovely, concise and understandable, why did we invent mathematical notation in the first place?
Is it really easier to describe complex instructions accurately and precisely to a machine in natural language, or isn't something closer to mathematical markup a much better choice? Where should that line be drawn? And finally, are you attracted to languages that are touted as resembling English sentences? Should this whole question have just been a one-liner:
naturalLanguage > computerishLanguage ? booAndHiss : cheerLoudly;
Well, of course, natural languages are rarely clear, simple, clean, lovely, concise or understandable, which is one of the reasons why most programming is done in languages far from natural.
My answer to this would be that the ideal programming language lies somewhere between a natural language and a very formal language.
On the one extreme, there are the formal, minimal, mathematical languages. Take for example Brainfuck:
,>++++++[<-------->-],[<+>-]<. // Wikipedia's addition example: reads two ASCII digits and prints their sum
Or, somewhat preferable to the above mess, any type of lambda calculus:
λxy.x
λxy.y
This is one possible way of expressing the Boolean truth values (true and false, respectively) in lambda calculus. It doesn't look very neat, especially when you build logical operators (such as AND, e.g. λpq.pqp) around them.
I claim that most people could not write production code in such a minimalistic, hard-to-grasp language.
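To make the encoding concrete, here is a minimal sketch of the same Church booleans written as Python lambdas (the decode helper is mine, just to display results):
# Church booleans: a boolean is a function that picks one of two arguments.
TRUE = lambda x: lambda y: x        # corresponds to λxy.x
FALSE = lambda x: lambda y: y       # corresponds to λxy.y
AND = lambda p: lambda q: p(q)(p)   # corresponds to λpq.pqp

def decode(b):
    # Turn a Church boolean back into a native bool for printing.
    return b(True)(False)

print(decode(AND(TRUE)(FALSE)))  # False
print(decode(AND(TRUE)(TRUE)))   # True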
The problem on the other end of the spectrum, namely with natural languages as they are spoken by humans, is that all their complexity and flexibility allows the programmer to express vague and indefinite things that mean nothing to today's computers. Let's take this sample program:
MAYBE IT WILL RAIN CATS AND DOGS LATER ON. WOULD YOU LIKE THIS, DEAR COMPUTER?
IF SO, PRINT "HELLO" ON THE SCREEN.
IF YOU HATE RAIN MORE THAN GEORGE DOES, PRINT SOME VAGUE GARBAGE INSTEAD.
(IN THE LATTER CASE, IT IS UP TO YOU WHERE YOU OUTPUT THAT GARBAGE.)
Now this is an obvious case of vagueness. But sometimes you would get things wrong with more reasonable natural language programs, such as:
READ AN INTEGER NUMBER FROM THE TERMINAL.
READ ANOTHER INTEGER NUMBER FROM THE TERMINAL.
IF IT IS LARGER THAN ZERO, PRINT AN ERROR.
Which number is IT referring to? And what kind of error should be printed? (You forgot to specify it.) You would have to be really careful to be extremely explicit about what you mean.
It's already too easy to misunderstand other humans. How do you expect a computer to do better?
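For contrast, here is one unambiguous rendering of that fragment in Python. Note that every choice the English version left open has to be made explicitly; I assume IT means the second number and invent a concrete error message:
# Assumption: "IT" refers to the second number read.
first = int(input("Enter an integer: "))
second = int(input("Enter another integer: "))
if second > 0:
    print("Error: the second number is larger than zero.")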
Thus, a computer language's syntax and grammar have to be strict enough not to allow ambiguity. A statement must evaluate in a deterministic way. (There may be corner cases; I'm talking about the general case here.)
I personally prefer languages with a very limited set of keywords. You can quickly learn such a language, and you don't have to choose between 10,000 ways of achieving one goal simply because there are 10,000 keywords for doing the same thing (as in: GO/WALK/RUN/TROT/SLEEPWALK/etc. TO THE FRIDGE AND GET ME A BEER!). If you do need to think about 10,000 different ways of doing something, it won't be due to the language, but due to the fact that there are 9,999 stupid ways to do it, and one elegant solution that just shines more than all the others.
Note that I wrote all the natural language examples in upper case. That's because I sort of had good old GW-BASIC and COBOL in mind while writing this. There have been some examples of programming languages that lean on natural language, and I think history has shown that they are, in general, somewhat less widespread than, say, terse C-style languages.
I recently read that according to Gartner there are over 400 billion lines of COBOL source code in active use worldwide today.
That doesn't prove anything other than that banks and governments are fond of their legacy code, but you could construe it as a testament to the success of English-like programming languages. I'm not aware of any other programming language that is so close to English and so verbose.
Aside from that, I tend to agree with the other respondents: Programmers prefer not to type so much, and in general a language based on mathematics-like shorthand is both more expressive and more precise than one based on English.
There's a point where terse, expressive code starts to look like line noise; Perl, APL and J come to mind as examples with "illegible one-liners". Programmers are humans, and it may be beneficial to leave them some similarity to natural language, to give their brains something familiar to hold on to. Thus, I advocate a happy medium: a language reminiscent of, but not too close to, natural language.
"When a programming language is created that allows programmers to program in simple English, it will be discovered that programmers cannot speak English." ~ Unknown
In my (not so) humble opinion, no.
Natural language is full of ambiguities. Normally we do not think of them, because humans can easily disambiguate them based on many criteria often unavailable to the computer. First off, we have knowledge about the world (elephants don't fit in pajamas), but we also use more senses than just hearing when we speak to each other, body language to name one. Intonation and the manner in which things are said also help a lot to disambiguate. It is harder to catch irony or sarcasm in written text, which is more or less a transcription of what we would say (more so in the case of IM, less so in the case of well-written articles). In general there is loads and loads of ambiguity in natural language, for instance in where the prepositional phrases (PPs) attach:
"Workers [dumped [sacks [with flour]]]"
"Workers [dumped [sacks] [with a fork-lift]]]"
Any human immediately tells where the PP attaches: it's reasonable to have sacks with flour in them, and it's reasonable to use a fork-lift to dump something. Another very troublesome area is the word "and", which messes up the grammar horrendously, as do all the references we use: the pronouns in general, but also more complex references, e.g. "Bill bought a Dodge Viper; sadly, the car was a lemon".
So we have three options. We can keep the ambiguities in and try to deal with them, accepting very many disambiguation errors and very, very slow parsing (no LALR or LL parser will work here). Or we can make an artificial grammar that resembles natural language while staying deterministic, which is more reasonable but still horrible: we now have a language that falsely resembles English, but isn't, which is confusing. We get none of the benefits of a proper syntax and none of the benefits of natural language, but an oversized, overworded monstrosity with a difficult, unintuitive grammar that is hard to learn and slow to write.
The third way is realizing that we need a succinct way of expressing ourselves that can also be processed by a computer, not resembling any natural language, but focusing on being an unambiguous description of an algorithm. This increases readability, especially compared to a very precise natural language counterpart. This is why many people prefer to also read the pseudo-code when dealing with difficult problems or advanced algorithms: it relieves us of the trouble of dealing with ambiguities, and it is more optimal for expressing computer instructions.
The issue isn't so much that it's easier to describe complex ideas using one approach or the other; it's that machine languages are certainly easier to understand (at least for machines). The biggest issue is, as always, ambiguity. Computers are terrible at understanding it, so most grammars for programming languages need to be constructed either to remove all ambiguity, or so that ambiguity isn't actually a problem (this is tricky).
Any programming language that allows for ambiguity would be terribly error-prone; and any natural language that doesn't allow ambiguity would be terribly verbose and convoluted (I'm looking at you, Lojban [ok, maybe Lojban isn't so bad, still...]).
The propensity some people show for preferring natural languages as programming languages may essentially be rooted in the desire to eventually be able to feed a physics textbook into a parser, whereupon it'll do your homework when asked.
Of course, that's not to say that programming languages shouldn't have hints of natural language: Especially for OOP it makes good sense to have calling grammar resemble natural grammar, like in Obj-C, which is sort of a game of mad libs:
[pot makeCoffee:strong withSugar:NO];
Doing the same in Brainfuck would be, well, a brainfuck; three full pages of code to flip a switch will do that to you.
In essence: the best languages are (probably) the ones that resemble natural languages without pretending to be one. (Avoiding the uncanny valley of programming languages, if there is such a thing. [Subclauses! Yay!])
A natural language is too ambiguous to be used as programming language. It has to be artificially constrained to eliminate ambiguities.
But that defeats the purpose of having a "natural" programming language: you keep its verbosity and get none of its advantages in expressiveness.
I think the fourth language I coded in professionally (after Fortran, Pascal and COBOL) was Natural, a pretty obscure 4GL of 1980s vintage for developing mainframe systems against an ADABAS database.
It was called Natural, I believe, because it had pretensions to be so: supposedly management-readable like COBOL, but minus the fluff.
Which should tell you that attempts at "natural" programming languages have a commercial history of over 30 years now (more if you count COBOL), but they have pretty much lost out to languages that don't pretend to be natural yet do allow the programmer to define the problem succinctly. When I first started coding, the 1GL -> 2GL -> 3GL evolution wasn't that old, and the progression to 4GL (defined then as more English-like programming languages) for mainstream work seemed the obvious next step. It hasn't worked out that way. If anything, getting up to speed with coding has gotten harder, because there are more abstract concepts to learn.
SQL was designed with natural language in mind originally. Fortunately it hasn't held on too tightly to this and advances since its conception are less "naturalistic".
But anybody who has tried to write a complicated query in SQL will tell you that it's not that easy. You have to worry about the scope of certain keywords over your query. You end up with an incredibly hard-to-understand query that does some crazy shit, and you rewrite it from scratch every time you need to change something, because that's easier.
Natural language programming is a bad idea. The further you get from assembly, the more mistakes you can make, not in terms of logical errors or anything like that, but in terms of having the wrong assumptions about how the script interpreter/bytecode interpreter/compiler makes your code run on the CPU.
It seems to be a great feature for beginners, or for people who program as a "secondary activity". But I doubt you could reach the complexity and versatility of actual programming languages with natural language.
If there was a programming language that actually adhered to all of the conventions of the natural language it mimics, then that would be fantastic.
In reality, however, a lot of so-called "natural" programming languages have far stricter syntax than English, which means that although they are easily readable, it is debatable whether they are actually all that easy to write.
What makes sense in English is often a syntax error in AppleScript.
Everyday language isn't so clear, simple, clean, lovely, concise and understandable - to a computer. However, to a human, readability counts for a lot, and the closer you get to a natural language, the easier it is to read. That's why we're not all using assembly language.
If you have a completely natural language, there are a lot of things that need to be handled - the sentence needs to be parsed, each word must be understood - and there is plenty of room for ambiguity. That's generally not a good thing for a programming language, because then we're venturing into psychic programming - the computer has to figure out what you were thinking, which is not at all easy to get.
However, if you can make something sufficiently close to natural language - and yes, Inform 7 is probably the best example - so sentences look natural, but still have some structure you need to follow - then the code is almost instantly readable, even to people that don't know the language. There's usually also less specialized syntax to remember - because you're really just talking (a slightly modified form of) English - but if you have to do something out of the ordinary, then you might have to jump through some hoops to do that.
In practice, most languages don't bother with this, because that makes it easier for them to allow you to be precise. However, some will still hover closer to the "natural language". This can be a good thing: if you have to translate some pseudocode algorithm to a language, you don't need to manipulate it as much to make it work, reducing the risk that you make an error in the translation.
As an example, let's compare C and Pascal. This Pascal code:
for i := 1 to 10 do begin
  j := j + 1;
end;
is equivalent to this C code:
for (i = 1; i <= 10; i++) {
  j = j + 1;
}
If you had no prior knowledge of either syntax, the Pascal version is generally going to be simpler to read, if only because it's not as complex as a C for.
Let's also consider operators. Pascal and C both share +, - and *. They also both have /, but with different semantics: In C, / does an integer division if both operands are integers; in Pascal, it always does a "real" division and uses div for integer division. That means that you have to take the types into account when figuring out what actually happens in that line of code.
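Python, incidentally, resolves the same ambiguity the way Pascal does, with one operator per meaning, which makes for a compact illustration (Python's // floors while Pascal's div truncates toward zero, so they differ on negative operands):
print(7 / 2)   # 3.5 -- "/" always performs real division, like Pascal's "/"
print(7 // 2)  # 3   -- a separate integer-division operator, like Pascal's "div"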
C also has a bunch of other operators: &&, ||, &, |, ^, <<, >>. In Pascal, those are named and, or, and, or, xor, shl, shr respectively (Pascal reuses and and or for both the logical and the bitwise operations). Instead of relying on some semi-arbitrary sequence of characters, it's spelled out. It's instantly obvious that xor is, well, XOR; in the C version there's no obvious correlation between ^ and XOR.
Of course, this is to some degree a matter of opinion: I much prefer a Pascal-like syntax to a C-like syntax because I think it's more readable, but that doesn't mean everyone else does. A more natural language is usually going to be more verbose, and some people simply dislike that extra level of verbosity.
Basically, it's a matter of choosing what makes the most sense for the problem domain. If the problem domain is very limited (as with Inform), then a natural language makes perfect sense. If it's a very generic domain (as with C), then you either need far more advanced processing than we are currently capable of, or a lot of verbosity to fill in the details. In that case, you have to choose a balance depending on what sort of users will be using the language: regular people need more naturalness, while people who know programming are usually comfortable with less natural languages and will prefer something closer to that end of the spectrum.
I think the question is, who reads and who writes the application code in question? I think, regardless of the language or architecture, a trained software developer should be writing the code, and analyze the code as bugs arise.

What does "powerful" mean, when discussing programming languages?

In the context of programming language discussion/comparison, what does the term "power" mean?
Does it have a well defined meaning? Even a poorly defined meaning?
Say if someone says "language X is more powerful than language Y" or asks the same as a question, what do they mean - or what information are they trying to find out?
It does not have a well-defined meaning. In these types of discussions, "language X is more powerful than language Y" usually means little more than "I like language X more than language Y." On the other end of the spectrum, you'll also usually have someone chime in about how any Turing-complete language can accomplish the same tasks as any other Turing-complete language, so that neither is strictly more powerful than the other.
I think a good meaning for it is expressivity. When a language is highly expressive, it means less code is required to express concepts. To me, this doesn't just mean that you have to write less code to accomplish the same tasks, but also that the code is easily readable by humans. Of course, generally (to a point), having fewer lines of code to read makes the task of reading and understanding easier for humans.
Having a "powerful" standard library comes into play here along the same lines. If a language comes equipped with thorough, complete libraries, then idiomatic code in that language will be able to benefit from the existing library code and not have to repeat or reinvent common functionality in application code. The end result is, again, having to write and read less code to accomplish the same tasks.
I keep saying "generally" and "to a point", because once a language gets too terse, it gets more difficult for humans to decipher. I suppose at this extreme, a language may still be considered "more powerful" (or even "too powerful"). So I guess I'm saying my personal interpretation of "powerful" includes some aspects of "useful" and "readable" in it as well.
C is powerful because it is low-level and gives you access to hardware. Python is powerful because you can prototype quickly. Lisp is powerful because its REPL gives you fantastic debugging opportunities. SQL is powerful because you say what you want and the DBMS will figure out the best way to do it for you. Haskell is powerful because each function can be tested in isolation. C++ is powerful because it has ten times the number of syntactic constructs that any one person ever needs or uses. APL is powerful since it can squeeze a ten-screen program into ten characters. Hell, COBOL is powerful because... why else would all the banks be using it? :)
"Powerful" has no real technical meaning, but lots of people have made proposals.
A couple of the more interesting ones:
Paul Graham wants to call a language "more powerful" if you can write the same programs in fewer lines of code (or some other sane, sensible measure of program size).
Matthias Felleisen has written a very serious theoretical study called On the Expressive Power of Programming Languages.
As someone who knows and uses many programming languages, I believe that there are real differences between languages, and that "power" can be a convenient shorthand to describe ways in which one language might be better than another. Nevertheless, whenever I hear a discussion or claim that one language is more powerful than another, I tend to keep one hand firmly on my wallet.
The only meaningful way to describe "power" in a programming language is "can do what I require with the least amount of resources" where "resources" is defined as "whatever costs I'd rather not pay" and could, thus, be development time, CPU time, memory space, money, etc.
So basically the definition of "power" is purely subjective and rendered meaningless in any objective discussion.
Powerful means "high in power". "Power" is something that increases your ability to do things. "Things" vary in shape, size and other things. Loosely speaking therefore, "powerful" when applied to a programming language means that it helps you to do perform your tasks quickly and efficiently.
This makes "powerful" somewhat well defined but not constant across domains. A language powerful in one domain might be crippling in another eg. C is very powerful if you want to do systems level programming since it gives you direct access to the machine and hardware and structures that let you code much faster than you would in assembly. C compilers also produce tight code that runs fast. However, once you move to web applications, C can become very "unpowerful" and crippling since it's so much effort to get something up and running and you have to worry about a lot of extraneous details like memory etc.
Sometimes, languages are "powerful" in multiple domains. This gives them a general "powerful" tag (or badge since were are on SO here). PG's claim is that with LISP, this is the case. That might be true or might not be.
At the end of the day, "powerful" is a loaded word so you should evaluate who is saying it, why he's saying it and what it means to to your work.
There are really only two meanings people are worried about:
"Powerful" in the sense of "takes less resources (time, money, programmers, LOC, etc.) to achieve the same/better result", and "powerful" in the sense of "is capable of doing a wide range of tasks".
Some languages are extremely resource-effective for a small range of tasks. Others are not so resource-effective, but can be applied to a wide range of tasks (e.g. C, which is often used in OS development, creation of compilers and runtime libraries, and work with microcontrollers).
Which of these two meanings someone has in mind when they use the term "powerful" depends on the context (and even then is not always clear). Indeed often it is a bit of both.
Typically there are two distinct meanings:
Expressive, meaning the code tends to be very short and understandable
Low level, meaning you have very fine-grained control over the hardware.
For most languages, these two definitions are at opposite ends of the spectrum: Python is very expressive but not very low-level; C is very low-level but not very expressive. Depending on which definition you pick, either language is powerful or not powerful.
Nothing. Absolutely nothing.
To high-level programmers it might mean a lot of built-in data types, or perhaps abstractions that make it easy to create or follow design patterns.
Paul Graham is a very high-level guy; here is what he has to say:
http://www.paulgraham.com/avg.html
Java guys might tell you something about portability, the power to reach every platform.
C/UNIX programmers may tell you that it's speed and efficiency, complete control over every inch of memory.
VHDL/Verilog programmers will tell you it's complete control over every clock and gate, so as not to waste any electricity or time.
But in my opinion a "powerful language" is one that supports all the features you need to complete your task. Documentation may be important, or perhaps portability, or the ability to do graphics. It could be anything: writing a GUI in assembly is just stupid, and so is trying to design an embedded processor in Flash.
Choosing a language that suits your needs perfectly will always feel like power.
I view the term as marketing fluff, with no one well-defined meaning.
Consider, say, assembler, C, and C++. On occasion one drops from C++ "down" to C for particular needs, and in turn from C down to assembler. That makes assembler the most powerful, because it's the only language that can do everything. Or, to argue the other way, a single line of C++ code can replace several lines of C (hiding polymorphic dispatch via function pointers, for example) and a single line of C replaces many lines of assembler; so C++ is more powerful because one line does "more".
I think the term had some currency when products such as early databases and spreadsheets had in-built languages, some quite restricted. So vendors would tout their language as being "powerful" because it was less restricted.
It can have several meanings. In the very basic sense there's power as far as what is computable. In that sense the most powerful languages are Turing Complete which includes pretty much every general purpose programming language (as opposed to most markup languages and domain specific languages which are often not Turing complete).
In a more pragmatic sense it often refers to how concisely (and readably) you can do certain things. Basically how easy is it to do certain tasks in one language compared to another.
Which language is more powerful (besides being somewhat subjective) depends heavily on what you're trying to do. If your requirement is to get something running on a small device with 64k of memory, you're likely not going to be using Java; most likely the right language would be C or C++ (or, if you're really hardcore, assembly). If you need a very simple CRUD app done in one day, maybe something like Ruby on Rails would be the way to go. (I know Rails is a framework and Ruby is the language, but these days the available libraries and frameworks factor greatly into picking a language.)
I think that, perhaps coincidentally, the physics definition of power is relevant here: "The rate at which work is performed."
Of course, a toaster does not perform very quickly the work of putting out fires. Similarly, the power of a programming language is not universal, but specific to the domain or task to which it is being applied. C is a powerful language for writing device drivers or implementations of higher-level languages; Python is a powerful language for writing general-purpose applications; XPath is a powerful language for writing queries on structured data sets.
So given a problem domain, the power of a language can be said to be the rate at which a competent programmer is able to use it to solve problems in that domain.
A precise answer is hard to reach, because the elements that define "powerful" (in the context of languages) come from many dimensions.
Consider how many there could be; and a lot will still be missing:
runtime speed
code size
expressiveness
supported paradigms
development / debugging time
domain specialization
standard libs
codebase
toolchain ecosystem
portability
community
support / documentation
popularity
(add more here)
These and other parameters together paint a picture of what programming in a given language would be like. That is only a definition, though; the only real knowledge comes with the actual practice of using the language, but I digress.
The question comes down to which parameter best represents the intrinsic quality of a language. If you refer to the language in itself, its ultimate, intrinsic purpose is to express things, and thus the most representative parameter is rightfully expressiveness. It is also the one that resonates most often when someone talks about how powerful a language is.
The moment you try to widen the question to cover more than the expressiveness of the language "as a language, as a tongue", you are really talking about different kinds of environment: the social environment, the development environment, the commercial environment, and so on.
Depending on the complexity of the environment to be defined, you'll have to mix in more parameters drawn from multiple, vast, overlapping and sometimes contradictory dimensions, and eventually the point of the definition will be lost or the question will have to be narrowed.
This approximation still won't answer "what is an expressive language", but, again, a common understanding is given by the definitions that Vineet points out in his answer and that Forest remarks on in the comments. I agree; for me, "expression" is "conveying meaning".
I remember many instructors in college calling whatever language they were teaching "powerful".
Leads me to think:
Powerful = a relative term comparing the latest way to code something vs. the original or previous way.
I find it useless to use the word "powerful" when discussing anything software-related. Every time my professor in college introduced a new concept such as polymorphism, he would say "so this is a really powerful feature". After a while I got annoyed. If everything is powerful, then nothing is; it's all the same. You can write code to do anything. Does it really matter how much code is required to do it? You can say it's short or efficient, but "powerful" is just useless. Nuclear energy is powerful. Code is words.
I think that power would normally refer to how quickly a language can process data. For example, I found that in Python, as soon as a list exceeds a length of approximately 2,000 it becomes unbearably slow, whereas in C++ a list can easily contain 20,000 entries without doing so.

Roadmap to a better programmer

It's always said that the more you program, the better you become. Sounds good and true.
But I was wondering if there is a proven route to becoming a better programmer.
Something like:
Learn a
Learn b
Learn c > 'Now you are good to burn the engines'
Try stuff around based on your learning.
The answer might be similar to a CS course roadmap, but I want to hear from successful programmers who might want to pitch in with something notable.
Thanks
It's not true that practice makes perfect.
It's perfect practice that makes perfect.
If all you do is keep repeating the same bad practices again and again, you'll only make it possible to create bad code faster.
By all means keep coding. But at the same time be critical of everything you do. Always have a jaundiced eye that looks for ways to do things better. Read widely to get new ideas. Talk to others about how they do things. Look at other people's code, good and bad.
There's no "sure" way to learn anything that I know of. If there was, anyone could master this.
All questions are rhetorical and meant to stimulate thought.
Technical parts:
Design Patterns - There are probably some specific to a domain but generally these are useful ways of starting parts of an application. Do you know MVC or MVP?
Basic algorithm starting points - Divide and conquer, dynamic programming, recursion, creating special data types like a heap, being greedy, etc.
Problem solving skills - How easily can you jump in and find where a bug is? Can you think of multiple solutions to the problem?
Abstract modelling - How well can you picture things in your head in terms of code or classes when someone is describing a problem?
High level versus low level - How well do you understand when someone wants something high- or low-level? This is just something I'd toss out there, as these terms get thrown around a lot, as in a high-level view of something or a low-level language.
Process parts:
Agile - Do you know Scrum, XP, and other new approaches to managing software projects? How about principles like YAGNI, DRY and KISS? Or principles like SOLID? Ideas like Broken Windows?
Developer Environment - How well do you know the IDE you use? Source control? Continuous integration? Do you know the bottlenecks on your machine in terms of being productive?
xDD - Do you know TDD, BDD, and the other x-driven development approaches?
Refactoring - Do you go back over your old code and make it better or do you tend to write once and then abandon your code?
Soft skills:
Emotional Intelligence - Can be useful for presentations and working with others mostly.
Passions/Motivation - Do you know what gets your juices flowing and just kick butt in terms of being productive? Do you know what you would like to do for many many years?
My main piece of advice would be: don't be afraid to rewrite your own code. Look at stuff you wrote even a month ago and you will see flaws and want to rewrite stuff.
Make sure that you understand some fundamentals: collections, equality, hashcodes etc. These are useful across pretty much all modern languages.
Depending on the language you use, run lint and metric tools over your code. Not all their suggestions will be applicable, but learning which are important and which are not is itself valuable. E.g. FindBugs, PMD, etc. for Java.
Above all refine and keep refining your work. Don't treat your work as abandonware!
Learn your first programming language, a new programming paradigm, or find a mentor you can learn from
Apply what you've learnt in a real-world project
Learn from your mistakes and successes, and go to step one
The trick is knowing what to learn first:
Programming languages - this is the place to start, because you cannot write software without knowing at least one of these. After you've mastered one language, try learning another.
Programming paradigm - i.e. object oriented, dynamic/functional programming etc. Try to learn a new one with each new language.
Design concepts - S.O.L.I.D, design patterns as well as architectural concepts.
People skills - learn to communicate your ideas.
Team leadership - learn how to sway others and how to become a team or technology lead.
After that the sky is the limit.
I would look at improving roughly in this order, in iterations with each building on the previous one:
Programming concepts. Understand things like memory management, pointers, stacks, variable scope, etc.
Languages. Work on mastering several modern languages.
Design concepts. Learn about design patterns. Practice using them.
Communication. Often-overlooked. You can only become a highly valued Software Engineer if you can communicate effectively with non-tech people. Learn to listen and understand the needs that people are expressing, translate that into a set of requirements and a technical design, but then explain what you understood (and designed) back to them, in terms they can understand, for validation before you code. This is not an easy one to master, but it is essential.
Architectural concepts. Learn to understand the big picture of large, complex systems.
Learning a programming language is in many ways similar to learning a spoken language. The only way to get good at it is to do it as often as possible. In other words:
Practice, practice, read and then practice more
Take time to learn about all sorts of coding techniques, tools and programming wisdom. This I have found to be crucial to my development. It's too easy to just code away and feel productive. Think about what you could do if you just had some more knowledge/weaponry under your belt to bang out that next widget.
Knowledge/know how is our real currency. The more we know the more we can make a better decision about how something should be done and do it faster.
For example, learn about:
• Development practices, software design, estimation, methodologies, business analysis, database design (there are a lot of great books out there and online resources)
• Read code - open source projects are a good place for this. Read programming blogs.
• Try to participate in open source projects.
• Look for programming user groups in your town and/or someone who can mentor you.
And yes, as mentioned practice. Don't just read, do and watch how you will improve. :)
Practice, practice, practice.
Once you're over the basic hump of being able to program, you can also read useful books (e.g. Code Complete, Effective Java or equivalents) for ideas on how to improve your code.
First and foremost write code. Write as much as you can. Tackle hard problems. If you want to be a really good programmer you need to get into the guts of what you are doing. Spend a lot of time in debuggers looking at how things work. If you want to be a good programmer who really understands what is going on you need to get down to the metal and write highly async code, learn about how processors work and why SSE is so awesome. Understand threading primitives and be able to write them as well as describe what is actually happening in the processor. I could keep going here but you get the idea.
Second find someone who knows a lot more than you and learn. This relationship will work better if you are already deeply immersed in writing lots of code.
Third, spend some time in a large high quality open source code base. I learned a ton from the Quake I and Quake II code. Helped me be a better programmer.
Fourth take on hard problems. Push your limits. Build things that you thought were impossible. Right now I am writing a specialized compiler. I have learned so much just working on this for the last couple of months.
Sure, strictly speaking, the more you practice programming, the better you become at solving those sorts of problems. But is that what you really want?
Programming is a human activity more than a technological one, at its heart. It's easy to improve your computer skills, not so hard to improve your interpersonal skills.
Read "Journey of the Software Professional" by Hohmann. One of the concepts the concepts Hohmann describes is the "cognitive library," which includes both programming skills and non-programming skills. Expand your cognitive library, and your programming skill will improve too.
Read a lot of non-programming books too, and observe the world around you. Creating useful metaphors is an essential skill for the successful programmer. Why do restaurants do things how they do? What trade-offs is the garbage department making when they pick up the garbage every few days instead of every day? How does scaling affect how a grocery store does business? Be an inquisitive human to be a better programmer.
For me, there has to be a reason to learn something new... that is, unless I have a project in mind or some problem I need to solve, there's no hope. If that prerequisite is met, then I usually try to get "Hello, world" working, and after that the sky's the limit. So much of development these days is just learning new APIs. Occasionally there's some kind of paradigm shift that blows your mind, but that's not as common as people like to think, IMHO.
Find a program that intrigues you, one that solves a problem, or one that would simplify many of your tasks. Try to write something similar. You'll get up to speed very quickly and have fun doing it at the same time.
You can try learning one thing really well and then expanding out to programming areas that are associated with the things that you have learnt, so that you can offer complete solutions to customers.
At the same time, devote part of your time to explore things outside your comfort zone.
Once you have learned something, try to learn something a little harder. Read and practice a lot with things that seem confusing at first (lambda functions, threading, array manipulation, etc.). It will take time, but once you have practiced enough, what seemed confusing at first will be familiar and easy.
In addition to the rest of the great advice already given here, don't be afraid to read about coding and good practice, but also take everything with a grain of salt and see what works best for you. A lot of advice is opinion.
Good sites to read:
- thedailywtf.com
- joelonsoftware.com
- codinghorror.com
- blogs.msdn.com/oldnewthing
A great place to get practice is programming competition websites. Those will help you learn how to write good algorithms, not necessarily maintainable code, but they're still a good place to start for learning.
The one I used to use (back when I had time) was:
http://uva.onlinejudge.org/
Learn more than one language. One at a time, definitely, but ultimately you should be fluent in a couple. This will give you a better perspective I think, and help you to become an expert at programming, rather than being an expert at a certain language.
Learn the ins and outs of computers at all levels, hardware, os, etc. Ideally you should be able to build your own system, install multiple operating systems on it, and diagnose just about every problem that can arise. I know many programmers who are not "computer tech people" and their failure to understand what is happening at every level becomes a major hindrance in diagnosing and fixing unusual bugs or performance issues.
As well as looking at 'last week's code', talk to users of your work after delivery; be one yourself if possible.
It's not my bag, but some of the best coders I know have spent time supporting applications. The experience improved their product, I'm sure.
Eat, breathe and dream the programming language you're using (no, seriously, it helps).
There are two kinds of learning:
1. Informal (like how you learned to function in society: through interaction with peers and family)
2. Formal (like your high school training: through planned instruction)
If you want an entry-level programming job, formal training via an undergrad Computer Science/Engineering degree is the way to go. However, if you want to become a rock-star developer, it is best done by informal training: make unintentional mistakes and have senior developers curse at you; learn a design pattern because an app you are updating uses it; almost cry because a bad developer wrote a huge messy program lacking documentation and best practices, and now you have to make several updates to it ASAP; things of that nature.
It is hard for anyone to give you a list of all you need to know. It varies per area (e.g. a web developer vs. a desktop developer) and it varies per company (e.g. Microsoft, which sells software, vs. General Motors, which mainly just uses it in their cars). Informal training, and being engaged in trying to learn to do your job better and get promoted, is your best bet in my opinion.
To prove my point, everyone here has great answers but they all differ. Ask a rock-star developer how he learned something or when, why; they may not know- things just happen.
Practice, individually and collectively
Keep an open mind; always learn new things and don't limit yourself to what's familiar, and not solely from a tech perspective: UI design, people skills... Don't be afraid of what's new
Peer review: talk to people about your code, and let people talk to you about their code. Everyone has a unique way of looking at a problem, and you will learn a great deal from peers
Love coding. If you love what you're doing, putting in a lot of time seems effortless. Every coder needs the drive!
One small addition to these good answers. When I work on someone else's code, usually I pick up something new. If you have the opportunity to work with someone else that is of equal or greater skill, noticing their programming style can teach you tons.
For example, in C++ & Javascript I no longer use if() statements without braces. The reason is that it's just too easy to mistakenly put:
while (true) {
if (a > b)
print a
print b
}
This is an obvious typo, but very easy to introduce, especially if you're editing existing code. I just call it defensive programming in my mind, but little tricks like this are valuable at making you better.
So, find a peer or mentor, and work on their code.
I am not sure if the OP was looking for general advice on how to be a good programmer, but rather something more specific.
I know I am reviving this thread, but I found it because I was trying to see if anyone asked this question already.
What I had in mind was, can we come up with a "knowledge-map" of programming concepts similar to the map that Khan Academy uses.
As a programmer, I want to be able to visualize the dependencies and relationships between different ideas, so that I can understand what skill level I am currently at; what I need to know before tackling a challenging subject; and be able to visualize my progress.
The very belief in the roadmap's existence blocks the road to perfection.

Should programming languages be intuitive?

What features could be added to a new programming language to make it more "intuitive"? When it comes to websites and desktops, we favor high usability, almost intuitive usability. It is becoming increasingly expected that your application should "just work". For a certain class of applications, the idea that one has to RTFM is a mark against the effectiveness of the application. People tend to expect the application to just work the way they "think" it should work. One could argue that this is a worthy standard that designers should strive for.
Can the same usability rigor apply to programming languages and developer environments? I realize there are tools like IntelliSense that provide hints, and a good IDE provides a lot of assistance. But what about the core language itself? What could be added (or removed) that makes certain programming techniques or algorithms more obvious to implement? How does one make regular expressions or recursion more intuitive? Or is this just folly?
Take a more concrete example: liquid layouts in HTML, CSS, or Flex and MXML. In HTML and CSS, the box model is anything but intuitive given the different implementations of Internet Explorer and the other browsers. And unless someone reads the documentation or studies the concept of the box model, it would be difficult to "just get it" when designing a layout on one's first stab at CSS. I would argue this is why tables thrived in the early days. The box model was implicit in the concept of a table cell. With the help of tools like Dreamweaver, one could get one's mind around percentage widths and layout within the constraints of table cells. Then CSS came into maturity and a whole set of valid reasons emerged for why tables are not for layout. But to achieve the same effects, designers had to really study the CSS implementations and the box model, and inject a new layer of abstraction into their thinking.
In another example, I find when programming lots of things in ActionScript and MXML, the whole concept of fluid layouts and percentage-based widths of elements is not very obvious and doesn't always follow intuition. I understand the basic problem, in that the Adobe Flash player and the layout need to understand things in absolute pixel terms. When it comes to the potential width of a component, I understand why percentages are not immediately obvious to implement at the core level of the code. Theoretically speaking, the Flash Player needs to know (or calculate) the exact width of a component so that it can provide the proper geometry to the video card when doing a draw on the screen. But when you introduce some concept of percentages, then you introduce the theoretical possibility of an infinite width. And to find "infinity - 1" pixels is not something a computer can directly do without some layer of abstraction and calculation. The viewport must be referenced. The program must know its boundaries. So absolute widths are the norm, although humans might prefer to design in terms of percentages.
When it comes to programming languages, can there be expressions and features that assist intuition when thinking about a programming task? Or are we better off "thinking like a computer" and just RTFM'ing the manual when we need to understand how to implement some feature or layout in code?
If you could change the syntax or semantics of your programming language of choice, what would you add, change, or remove to improve its "intuitiveness"?
Addendum: the reason for asking this question is inspired by seeing examples of what "novices" were able to achieve in Smalltalk in Alan Kay's lecture "Doing with Images Makes Symbols".
"If you could change the syntax or semantics of your programming language of choice what would you add, change, or remove to improve the "intuitiveness" of it?
"
Programming is hard. Really hard. Syntax changes don't matter much. IDEs are irrelevant to the fundamental challenge of programming.
The thing that is often baffling is the semantics of the language.
I don't know what "intuitive" means with respect to a thing as abstract as a programming language. Indeed, "intuition" is probably a bad thing. Coming to a programming language with intuition means preconceived notions, biases and intellectual junk will take over.
I would never expect to "just get it" for anything on any level anywhere. Programming requires clear thinking -- not "intuition" -- not "expectation".
The only thing we can ever do is read the manual and understand the unique, distinct, novel semantics of the new thing we're confronted with.
I do know this: elegant simplicity is essential. Orthogonality of features. Clarity. Precision. Absence of exceptions or special cases. Above all, simplicity.
Layering on language features is fundamentally bad.
Covering language problems by layering in a complex IDE is worse.
See http://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD854.html
"when faced with something new and unfamiliar we try to relate it to what we are familiar with. In the course of the process we invent the analogies that enable us to do so.
It is clear that the above way of trying to understand does not work too well when we are faced with something so radically new, so without precedent, that all analogies we can come up with are too weak and too shallow to be of great help. A radically new technology can create such circumstances and the wide-spread misunderstanding about programming strongly suggests that this has happened with the advent of the automatic computer."
In short, "intuition" and "intellectual baggage" is the problem of the programmer. The best way to understand a technology is to approach it as something fresh, new and otherwise unknown.
Bottom Line.
The complexity is inherent.
You have two choices.
Develop intellectual tools (i.e., abstraction, summarization, etc.) to cope with it.
Get a job in another field.
Asking for the inherently complicated world of computing to morph into something any one person finds "intuitive" can't happen. Computing is too complicated to be "intuitive".
Another field that addresses the complexity of the "syntax" of programming languages is that of Visual Programming Languages (VPLs). The basic idea behind VPLs is to take the constructs of programming languages (decisions, subroutines, functions, etc.) and represent them graphically, typically as a data-flow diagram. One such language that has been gaining popularity recently is the Microsoft Visual Programming Language. I have not used it, and cannot make claims as to its power, but I have used LabVIEW to great effect, and I can say that you can do pretty much anything you can think of in LabVIEW, but you do have to think of it in a very different way.
That said, I find I have a personal preference for code rather than VPLs.
One step folks are taking that has as much to do with base class library as it does the language itself -- although to be honest, the two are often synonymous -- is the concept of a Fluent API. The basic idea is to make code "read like a sentence", the idea being that this makes the code more flexible and maintainable.
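A minimal sketch of the idea in Python; the Query class and its methods are hypothetical, purely to show the shape. Each method returns self, which is what lets the calls chain into a sentence:
class Query:
    def __init__(self):
        self.parts = []

    def select(self, *cols):
        self.parts.append("SELECT " + ", ".join(cols))
        return self  # returning self is what makes the chaining work

    def from_table(self, name):
        self.parts.append("FROM " + name)
        return self

    def where(self, cond):
        self.parts.append("WHERE " + cond)
        return self

    def build(self):
        return " ".join(self.parts)

# Reads almost like a sentence:
print(Query().select("name", "age").from_table("users").where("age > 21").build())
# SELECT name, age FROM users WHERE age > 21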

Why create a new programming language?

What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, Domain Specific Languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, Domain specific problem solving, Complexity in context
I love creating new languages for fun and for tackling domain-specific problems. These range from something as simple as Wikipedia markup to something as complex as Erlang, which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember, a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language built for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work -- what makes a language parsable, what makes semantics sensible, and how these are implemented -- can, I think, make you a better programmer.
Compilers embody a lot of the theory that underpins computer science:
Translation, abstraction, interpretation, data structures, state ... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn them independently, but compilers are a great context in which to learn complex topics such as DFA/NDFA automata, stack-based parsers, and abstract syntax trees.
compilers are beautiful machines I think :)
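To make one of those topics concrete, here is a toy sketch in Python of a recursive-descent parser that turns tokens into an abstract syntax tree and then evaluates it. The grammar, the tuple-based AST encoding, and all the names are invented for illustration.

```python
import re

def tokenize(src):
    # Numbers, plus, times, and parentheses -- nothing more.
    return re.findall(r"\d+|[+*()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    # expr := term ('+' term)*   -- lowest precedence
    def expr(self):
        node = self.term()
        while self.peek() == "+":
            self.advance()
            node = ("+", node, self.term())  # AST node as a plain tuple
        return node

    # term := atom ('*' atom)*   -- binds tighter than '+'
    def term(self):
        node = self.atom()
        while self.peek() == "*":
            self.advance()
            node = ("*", node, self.atom())
        return node

    def atom(self):
        if self.peek() == "(":
            self.advance()
            node = self.expr()
            self.advance()  # consume ')'
            return node
        return ("num", int(self.advance()))

def evaluate(node):
    if node[0] == "num":
        return node[1]
    op, left, right = node
    return evaluate(left) + evaluate(right) if op == "+" else evaluate(left) * evaluate(right)

ast = Parser(tokenize("1 + 2 * 3")).expr()
print(ast)            # ('+', ('num', 1), ('*', ('num', 2), ('num', 3)))
print(evaluate(ast))  # 7 -- precedence falls out of the grammar, not a lookup table
```

Even this much, written by hand, makes the textbook material tangible: the grammar comments are the theory, and the tuple trees are the ASTs a real compiler would go on to type-check and translate.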
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post -- though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that precisely this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (along a different dimension) as you will by writing an interpreter or compiler.
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
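As a very small taste of what that involves, here is a toy type checker in Python for a made-up expression language. The tuple-based AST and the type names are assumptions of this sketch, not anything from those books.

```python
# Toy type checker: ("int", n), ("bool", b), ("var", name),
# ("add", e1, e2) and ("if", cond, then, els) are the whole language.
def type_of(expr, env=None):
    env = env or {}
    tag = expr[0]
    if tag == "int":
        return "Int"
    if tag == "bool":
        return "Bool"
    if tag == "var":
        return env[expr[1]]  # look the variable's type up in the environment
    if tag == "add":
        if type_of(expr[1], env) == type_of(expr[2], env) == "Int":
            return "Int"
        raise TypeError("+ expects two Ints")
    if tag == "if":
        if type_of(expr[1], env) != "Bool":
            raise TypeError("condition must be Bool")
        t_then, t_els = type_of(expr[2], env), type_of(expr[3], env)
        if t_then != t_els:
            raise TypeError("branches must have the same type")
        return t_then
    raise TypeError(f"unknown expression {tag!r}")

print(type_of(("if", ("bool", True), ("int", 1), ("int", 2))))  # Int
# type_of(("add", ("int", 1), ("bool", True)))  # would raise TypeError
```

Each branch of type_of is one typing rule; the safety proof mentioned above is essentially an argument that a program this checker accepts can never get stuck at run time.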
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
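A sketch of what such a little compiler might look like (this is a reconstruction of the idea, not the author's actual code; the command syntax and the emitted classes are invented):

```python
# A "little language" for commands: one line per command, listing its fields.
SPEC = """
command CreateUser: name, email
command DeleteUser: user_id
"""

TEMPLATE = '''class {name}:
    def __init__(self, {args}):
{assigns}
'''

def compile_commands(spec):
    out = []
    for line in spec.strip().splitlines():
        head, fields = line.split(":")
        name = head.split()[1]
        args = [f.strip() for f in fields.split(",")]
        assigns = "\n".join(f"        self.{a} = {a}" for a in args)
        out.append(TEMPLATE.format(name=name, args=", ".join(args), assigns=assigns))
    return "\n".join(out)

generated = compile_commands(SPEC)
print(generated)  # the emitted "underlying" language -- here, plain Python
exec(generated)   # or write it to a module the service layer imports

cmd = CreateUser("ada", "ada@example.com")  # class defined by the exec above
print(cmd.name, cmd.email)
```

Generating source text is deliberately low-tech, but it keeps the repetition in one template instead of scattered across dozens of hand-written command classes.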
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you may.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then you are most likely missing something. There needs to be a fairly substantial gap between the available tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the very least the experience you gain from building it will help you understand how programming languages are implemented, and parts of that understanding will apply to future computation problems you run into.
Writing a compiler or interpreter requires a very firm understanding in computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding in hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and of the languages you work with specifically. You will have a better appreciation for syntax and for the trade-offs the language designers made when they wrote their specifications.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance - performance of what? IMHO execution speed should not depend on the language constructs but on what the language is translated to, which is a different matter: designing a syntax for a language is one thing; writing a compiler/virtual machine for it is another.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy about .NET. There are bugs in the framework that Microsoft will not fix, and thanks to GAC signing, I cannot fix them either. And even if it were not for GAC signing, hot-patching a global library is a bad idea (it might break some other application).
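For what it's worth, dynamic managed languages already allow this kind of swap, and only for the current process, which avoids the "might break some other application" problem. A toy Python sketch (the bug being fixed here is pretend):

```python
import json

# Keep a reference to the original so the patch can delegate to it.
_original_dumps = json.dumps

def patched_dumps(obj, **kwargs):
    # Pretend the stock behavior is wrong for our app and force sorted keys.
    kwargs.setdefault("sort_keys", True)
    return _original_dumps(obj, **kwargs)

json.dumps = patched_dumps  # every caller in this process now gets the fix

print(json.dumps({"b": 1, "a": 2}))  # {"a": 2, "b": 1}
```

Because module attributes are just names bound at run time, a client really can swap out a library function it disagrees with -- exactly the tinkering the answer asks .NET to permit.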
I for one don't care how compilers work, don't care about learning new languages, and don't care about using scripting languages like Perl and JavaScript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with the problems that really matter (the ones that can cost you or your firm money).

Resources