Inadvertent overconstraint: A downside of all declarative styles of description?

Page 163 of the book Software Abstractions has this remarkable statement:
The declarative style of description is very powerful, but it has a
downside: inadvertent overconstraint.
Wow!
Is "inadvertent overconstraint" a problem that occurs in all declarative languages? For example, does inadvertent overconstraint occur in XML Schema? Does inadvertent overconstraint occur in SQL?
In other words, is the "inadvertent overconstraint" problem peculiar to Alloy, or is it widespread and applicable to all declarative languages? If it's the latter, I would be most grateful for an explanation.

It's the flip side of the benefit that conjunction brings. In a programming language, you can't write a sort by saying "the output is a permutation of the input and the elements are in order", but you can say that in a specification language that allows and. This gives a lot of power, especially by enabling separation of concerns, and it lets you start with minimal constraints and then grow them. But there is a downside: add too many constraints, and you might end up at false (or worse, something just a bit stronger than you intended). Alloy's unsat core facility is helpful for dealing with this, but doesn't solve the problem entirely.
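The sorting example can be made concrete outside Alloy too. Here is a minimal QuickCheck sketch in Haskell (assuming the QuickCheck and containers packages; the names are mine, not from the book) of a spec built by conjunction, plus one extra innocent-looking conjunct that inadvertently overconstrains it:

```haskell
import qualified Data.Map.Strict as Map
import Data.List (sort)
import Test.QuickCheck

-- Multiset equality: "the output is a permutation of the input".
sameElements :: Ord a => [a] -> [a] -> Bool
sameElements xs ys = count xs == count ys
  where count = Map.fromListWith (+) . map (\x -> (x, 1 :: Int))

-- "... and the elements are in order".
ordered :: Ord a => [a] -> Bool
ordered zs = and (zipWith (<=) zs (drop 1 zs))

-- The spec is just the conjunction of the two constraints.
isSortOf :: [Int] -> [Int] -> Bool
isSortOf ys xs = sameElements xs ys && ordered ys

prop_sortMeetsSpec :: [Int] -> Bool
prop_sortMeetsSpec xs = sort xs `isSortOf` xs

-- Inadvertent overconstraint: add one more conjunct ("sorting
-- changes the list") and the spec becomes unsatisfiable whenever
-- the input was already in order.
overconstrained :: [Int] -> [Int] -> Bool
overconstrained ys xs = ys `isSortOf` xs && ys /= xs

main :: IO ()
main = do
  quickCheck prop_sortMeetsSpec                                 -- passes
  quickCheck (expectFailure (\xs -> sort xs `overconstrained` xs))
```

The overconstrained version fails on any already-sorted input (the empty list, for instance), which is exactly the "a bit stronger than you intended" failure mode described above.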

Large-scale design in Haskell? [closed]

What is a good way to design/structure large functional programs, especially in Haskell?
I've been through a bunch of the tutorials (Write Yourself a Scheme being my favorite, with Real World Haskell a close second) - but most of the programs are relatively small, and single-purpose. Additionally, I don't consider some of them to be particularly elegant (for example, the vast lookup tables in WYAS).
I'm now wanting to write larger programs, with more moving parts - acquiring data from a variety of different sources, cleaning it, processing it in various ways, displaying it in user interfaces, persisting it, communicating over networks, etc. How could one best structure such code to be legible, maintainable, and adaptable to changing requirements?
There is quite a large literature addressing these questions for large object-oriented imperative programs. Ideas like MVC, design patterns, etc. are decent prescriptions for realizing broad goals like separation of concerns and reusability in an OO style. Additionally, newer imperative languages lend themselves to a 'design as you grow' style of refactoring to which, in my novice opinion, Haskell appears less well-suited.
Is there an equivalent literature for Haskell? How is the zoo of exotic control structures available in functional programming (monads, arrows, applicative, etc.) best employed for this purpose? What best practices could you recommend?
Thanks!
EDIT (this is a follow-up to Don Stewart's answer):
#dons mentioned: "Monads capture key architectural designs in types."
I guess my question is: how should one think about key architectural designs in a pure functional language?
Consider the example of several data streams, and several processing steps. I can write modular parsers for the data streams to a set of data structures, and I can implement each processing step as a pure function. The processing steps required for one piece of data will depend on its value and others'. Some of the steps should be followed by side-effects like GUI updates or database queries.
What's the 'Right' way to tie the data and the parsing steps in a nice way? One could write a big function which does the right thing for the various data types. Or one could use a monad to keep track of what's been processed so far and have each processing step get whatever it needs next from the monad state. Or one could write largely separate programs and send messages around (I don't much like this option).
The slides he linked have a "Things we Need" bullet: "Idioms for mapping design onto types/functions/classes/monads". What are the idioms? :)
I talk a bit about this in Engineering Large Projects in Haskell and in the Design and Implementation of XMonad. Engineering in the large is about managing complexity. The primary code structuring mechanisms in Haskell for managing complexity are:
The type system
Use the type system to enforce abstractions, simplifying interactions.
Enforce key invariants via types (e.g. that certain values cannot escape some scope); a minimal sketch follows this list.
That certain code does no IO, does not touch the disk
Enforce safety: checked exceptions (Maybe/Either), avoid mixing concepts (Word, Int, Address)
Good data structures (like zippers) can make some classes of testing needless, as they rule out e.g. out of bounds errors statically.
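As a minimal sketch of the "enforce key invariants via types" point (the module and names are invented for illustration; base ships a real Data.List.NonEmpty): non-emptiness below is structural, so the head function is total and a whole class of tests disappears.

```haskell
module NonEmptyList (NonEmpty, fromList, toList, neHead) where
-- The (:|) constructor is not exported: the only way to build a
-- NonEmpty from outside is fromList, so other modules cannot
-- violate the invariant.

data NonEmpty a = a :| [a]

fromList :: [a] -> Maybe (NonEmpty a)
fromList []     = Nothing
fromList (x:xs) = Just (x :| xs)

toList :: NonEmpty a -> [a]
toList (x :| xs) = x : xs

-- Total by construction: no runtime "head of empty list" error.
neHead :: NonEmpty a -> a
neHead (x :| _) = x
```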
The profiler
Provide objective evidence of your program's heap and time profiles.
Heap profiling, in particular, is the best way to ensure no unnecessary memory use.
Purity
Reduce complexity dramatically by removing state. Purely functional code scales, because it is compositional. All you need is the type to determine how to use some code -- it won't mysteriously break when you change some other part of the program.
Use lots of "model/view/controller" style programming: parse external data as soon as possible into purely functional data structures, operate on those structures, then once all work is done, render/flush/serialize out. This keeps most of your code pure.
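A tiny sketch of that parse-early, render-late shape (the record type and field names are invented; the shape is the point):

```haskell
import Data.Maybe (mapMaybe)
import Text.Read (readMaybe)
import Data.Char (toUpper)

data Record = Record { name :: String, score :: Int }

-- Edge: parse external data into pure structures as soon as possible.
parseRecord :: String -> Maybe Record
parseRecord line = case words line of
  [n, s] -> Record n <$> readMaybe s
  _      -> Nothing

-- Middle: all the real work is pure and compositional.
process :: [Record] -> [Record]
process = map (\r -> r { name = map toUpper (name r) })
        . filter ((> 0) . score)

-- Edge: render only once the pure work is done.
render :: Record -> String
render r = name r ++ ": " ++ show (score r)

main :: IO ()
main = interact (unlines . map render . process
                         . mapMaybe parseRecord . lines)
```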
Testing
QuickCheck + Haskell Program Coverage (HPC), to ensure you are testing the things you can't check with types.
GHC + RTS is great for seeing if you're spending too much time doing GC.
QuickCheck can also help you identify clean, orthogonal APIs for your modules. If the properties of your code are difficult to state, they're probably too complex. Keep refactoring until you have a clean set of properties that can test your code, that compose well. Then the code is probably well designed too.
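For example, here is what "easy to state" looks like in practice; these are classic list properties, assuming the QuickCheck package. If your own API's laws can't be written this briefly, that is the design smell:

```haskell
import Data.List (insert, sort)
import Test.QuickCheck

-- Clean APIs have one-line laws.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

prop_insertPreservesSort :: Int -> [Int] -> Bool
prop_insertPreservesSort x xs = insert x (sort xs) == sort (x : xs)

main :: IO ()
main = do
  quickCheck prop_sortIdempotent
  quickCheck prop_insertPreservesSort
```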
Monads for Structuring
Monads capture key architectural designs in types (this code accesses hardware, this code is a single-user session, etc.)
E.g. the X monad in xmonad captures precisely the design for what state is visible to what components of the system.
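A sketch in that style (the Config/WinState types here are invented stand-ins, not xmonad's actual definitions; assumes the mtl package): the newtype pins down exactly which environment and state this layer of the system may touch, and nothing more.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Reader
import Control.Monad.State

data Config   = Config   { borderWidth :: Int }
data WinState = WinState { focused :: Maybe String }

-- The derived instances are the whole interface: readable config,
-- mutable window state, IO, and nothing else.
newtype App a = App (ReaderT Config (StateT WinState IO) a)
  deriving (Functor, Applicative, Monad, MonadIO,
            MonadReader Config, MonadState WinState)

runApp :: Config -> WinState -> App a -> IO (a, WinState)
runApp cfg st (App m) = runStateT (runReaderT m cfg) st

focusWindow :: String -> App ()
focusWindow w = modify (\s -> s { focused = Just w })

main :: IO ()
main = do
  (_, s) <- runApp (Config 2) (WinState Nothing) (focusWindow "xterm")
  putStrLn ("focused: " ++ show (focused s))
```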
Type classes and existential types
Use type classes to provide abstraction: hide implementations behind polymorphic interfaces.
Concurrency and parallelism
Sneak par into your program to beat the competition with easy, composable parallelism.
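A minimal example of sneaking parallelism in, assuming the parallel package (build with -threaded and run with +RTS -N): parMap parallelizes the expensive calls without changing the pure code at all.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Deliberately naive and expensive, so there is work to parallelize.
numPrimesBelow :: Int -> Int
numPrimesBelow n = length [k | k <- [2 .. n], isPrime k]
  where isPrime k = all (\d -> k `mod` d /= 0) [2 .. k - 1]

main :: IO ()
main = print (sum (parMap rdeepseq numPrimesBelow [20000 .. 20010]))
```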
Refactor
You can refactor in Haskell a lot. The types ensure your large scale changes will be safe, if you're using types wisely. This will help your codebase scale. Make sure that your refactorings will cause type errors until complete.
Use the FFI wisely
The FFI makes it easier to play with foreign code, but that foreign code can be dangerous.
Be very careful in assumptions about the shape of data returned.
Meta programming
A bit of Template Haskell or generics can remove boilerplate.
Packaging and distribution
Use Cabal. Don't roll your own build system. (EDIT: Actually you probably want to use Stack now for getting started.)
Use Haddock for good API docs
Tools like graphmod can show your module structures.
Rely on the Haskell Platform versions of libraries and tools, if at all possible. It is a stable base. (EDIT: Again, these days you likely want to use Stack for getting a stable base up and running.)
Warnings
Use -Wall to keep your code clean of smells. You might also look at Agda, Isabelle or Catch for more assurance. For lint-like checking, see the great hlint, which will suggest improvements.
With all these tools you can keep a handle on complexity, removing as many interactions between components as possible. Ideally, you have a very large base of pure code, which is really easy to maintain, since it is compositional. That's not always possible, but it is worth aiming for.
In general: decompose the logical units of your system into the smallest referentially transparent components possible, then implement them in modules. Global or local environments for sets of components (or inside components) might be mapped to monads. Use algebraic data types to describe core data structures. Share those definitions widely.
Don gave you most of the details above, but here's my two cents from doing really nitty-gritty stateful programs like system daemons in Haskell.
In the end, you live in a monad transformer stack. At the bottom is IO. Above that, every major module (in the abstract sense, not the module-in-a-file sense) maps its necessary state into a layer in that stack. So if you have your database connection code hidden in a module, you write it all to be over a type MonadReader Connection m => ... -> m ... and then your database functions can always get their connection without functions from other modules having to be aware of its existence. You might end up with one layer carrying your database connection, another your configuration, a third your various semaphores and mvars for the resolution of parallelism and synchronization, another your log file handles, etc.
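A minimal sketch of that shape, with a stubbed-out Connection standing in for a real database handle (assumes the mtl package; any real database library would replace the stub):

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Control.Monad.Reader

-- Stubs standing in for a real handle and query function.
data Connection = Connection
runQuery :: Connection -> String -> IO [String]
runQuery _ q = pure ["<row for: " ++ q ++ ">"]

-- The database module exports functions of this shape; callers in
-- any monad that can supply a Connection use them without ever
-- threading the handle themselves.
getUserNames :: (MonadReader Connection m, MonadIO m) => m [String]
getUserNames = do
  conn <- ask
  liftIO (runQuery conn "SELECT name FROM users")

main :: IO ()
main = runReaderT getUserNames Connection >>= mapM_ putStrLn
```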
Figure out your error handling first. The greatest weakness at the moment for Haskell in larger systems is the plethora of error handling methods, including lousy ones like Maybe (which is wrong because you can't return any information on what went wrong; always use Either instead of Maybe unless you really just mean missing values). Figure out how you're going to do it first, and set up adapters from the various error handling mechanisms your libraries and other code uses into your final one. This will save you a world of grief later.
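Here's what one such adapter looks like; the AppError type and lookupUser are invented for illustration (the errors package provides note with this exact shape, if you prefer a dependency):

```haskell
-- Adapter from Maybe-style errors into the Either style the rest
-- of the program uses.
note :: e -> Maybe a -> Either e a
note e Nothing  = Left e
note _ (Just x) = Right x

-- One error type for the application, carrying the "what went
-- wrong" information that Nothing throws away.
data AppError = MissingUser String | BadConfig String
  deriving Show

lookupUser :: String -> [(String, Int)] -> Either AppError Int
lookupUser n db = note (MissingUser n) (lookup n db)

main :: IO ()
main = do
  print (lookupUser "alice" [("alice", 1)])  -- Right 1
  print (lookupUser "bob"   [("alice", 1)])  -- Left (MissingUser "bob")
```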
Addendum, extracted from comments (thanks to Lii & liminalisht), with more discussion of different ways to slice a large program into monads in a stack:
Ben Kolera gives a great practical intro to this topic, and Brian Hurt discusses solutions to the problem of lifting monadic actions into your custom monad. George Wilson shows how to use mtl to write code that works with any monad that implements the required typeclasses, rather than your custom monad kind. Carlo Hamalainen has written some short, useful notes summarizing George's talk.
Designing large programs in Haskell is not that different from doing it in other languages.
Programming in the large is about breaking your problem into manageable pieces, and how to fit those together; the implementation language is less important.
That said, in a large design it's nice to try and leverage the type system to make sure you can only fit your pieces together in a way that is correct. This might involve newtype or phantom types to make things that appear to have the same type be different.
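For instance, a phantom-type sketch (all names invented): both ids are Ints underneath, but the pieces now only fit together one way.

```haskell
-- The phantom parameter a never appears on the right-hand side;
-- it exists only at compile time.
newtype Id a = Id Int deriving (Eq, Show)

data User   -- empty tag types
data Order

chargeOrder :: Id User -> Id Order -> String
chargeOrder (Id u) (Id o) =
  "charging user " ++ show u ++ " for order " ++ show o

main :: IO ()
main = putStrLn (chargeOrder (Id 1 :: Id User) (Id 9 :: Id Order))
-- Swapping the arguments is a type error, not a latent bug:
-- chargeOrder (Id 9 :: Id Order) (Id 1 :: Id User)  -- rejected
```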
When it comes to refactoring the code as you go along, purity is a great boon, so try to keep as much of the code as possible pure. Pure code is easy to refactor, because it has no hidden interaction with other parts of your program.
I first learned structured functional programming from this book.
It may not be exactly what you are looking for, but for beginners in functional programming it may be one of the best first steps toward learning to structure functional programs, independent of the scale. At every level of abstraction, the design should have clearly arranged structures.
The Craft of Functional Programming
http://www.cs.kent.ac.uk/people/staff/sjt/craft2e/
I'm currently writing a book with the title "Functional Design and Architecture". It provides you with a complete set of techniques for building a big application using a pure functional approach. It describes many functional patterns and ideas while building 'Andromeda', a SCADA-like application for controlling spaceships, from scratch. My primary language is Haskell. The book covers:
Approaches to architecture modelling using diagrams;
Requirements analysis;
Embedded DSL domain modelling;
External DSL design and implementation;
Monads as subsystems with effects;
Free monads as functional interfaces (see the sketch after this list);
Arrowised eDSLs;
Inversion of Control using Free monadic eDSLs;
Software Transactional Memory;
Lenses;
State, Reader, Writer, RWS, ST monads;
Impure state: IORef, MVar, STM;
Multithreading and concurrent domain modelling;
GUI;
Applicability of mainstream techniques and approaches such as UML, SOLID, GRASP;
Interaction with impure subsystems.
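To give a flavor of the "free monads as functional interfaces" item above, here is a minimal, self-contained sketch; the spaceship commands are invented for illustration and are not taken from the book's Andromeda code (the free package provides the Free type for real use):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A command functor for a made-up spaceship-control interface.
data ControlF next
  = ReadSensor String (Double -> next)
  | FireThruster String next
  deriving Functor

-- A minimal free monad, inlined so the sketch has no dependencies.
data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fa) = Roll (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Roll fa >>= k = Roll (fmap (>>= k) fa)

readSensor :: String -> Free ControlF Double
readSensor n = Roll (ReadSensor n Pure)

fireThruster :: String -> Free ControlF ()
fireThruster n = Roll (FireThruster n (Pure ()))

-- A script against the interface, with no commitment to how it runs.
script :: Free ControlF ()
script = do
  t <- readSensor "hull-temperature"
  if t > 100 then fireThruster "port" else pure ()

-- One interpreter of many: real hardware, a simulator, a test log...
runIO :: Free ControlF a -> IO a
runIO (Pure a) = pure a
runIO (Roll (ReadSensor n k)) =
  putStrLn ("reading " ++ n) >> runIO (k 42.0)
runIO (Roll (FireThruster n k)) =
  putStrLn ("firing " ++ n) >> runIO k

main :: IO ()
main = runIO script
```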
You may get familiar with the code for the book here, and the 'Andromeda' project code.
I expect to finish this book at the end of 2017. Until that happens, you may read my article "Design and Architecture in Functional Programming" (Rus) here.
UPDATE
I shared my book online (first 5 chapters). See post on Reddit
Gabriel's blog post Scalable program architectures might be worth a mention.
Haskell design patterns differ from mainstream design patterns in one important way:
Conventional architecture: Combine several components together of type A to generate a "network" or "topology" of type B.
Haskell architecture: Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts.
It often strikes me that an apparently elegant architecture tends to fall out of libraries that exhibit this nice sense of homogeneity, in a bottom-up sort of way. In Haskell this is especially apparent: patterns that would traditionally be considered "top-down architecture" tend to be captured in libraries like mvc, Netwire and Cloud Haskell. That is to say, I hope this answer will not be interpreted as an attempt to replace any of the others in this thread, just that structural choices can and should ideally be abstracted away in libraries by domain experts. The real difficulty in building large systems, in my opinion, is evaluating these libraries on their architectural "goodness" versus all of your pragmatic concerns.
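One cheap way to see the quoted "components of type A compose into type A" shape in code is Monoid. This logger example is mine, not Gabriel's, and assumes nothing beyond base:

```haskell
-- Components of type Logger compose into another Logger,
-- indistinguishable in character from its parts.
newtype Logger = Logger { runLogger :: String -> IO () }

instance Semigroup Logger where
  Logger f <> Logger g = Logger (\msg -> f msg >> g msg)

instance Monoid Logger where
  mempty = Logger (\_ -> pure ())

consoleLogger, fileLogger :: Logger
consoleLogger = Logger putStrLn
fileLogger    = Logger (appendFile "app.log" . (++ "\n"))

main :: IO ()
main = runLogger (consoleLogger <> fileLogger) "starting up"
```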
As liminalisht mentions in the comments, The category design pattern is another post by Gabriel on the topic, in a similar vein.
I have found the paper "Teaching Software Architecture Using Haskell" (pdf) by Alejandro Serrano useful for thinking about large-scale structure in Haskell.
Perhaps you have to take a step back and think about how to translate the description of the problem into a design in the first place. Since Haskell is so high level, it can capture the description of the problem in the form of data structures, the actions as procedures, and the pure transformations as functions. Then you have a design. Development starts when you compile this code and find concrete errors about missing fields, missing instances and missing monadic transformers in your code, because, for example, you perform a database access from a library that needs a certain state monad within an IO procedure. And voila, there is the program. The compiler feeds your mental sketches and gives coherence to the design and the development.
In such a way you benefit from Haskell's help from the beginning, and the coding is natural. I would not worry about doing something "functional" or "pure" or general enough if what you have in mind is a concrete, ordinary problem. I think that over-engineering is the most dangerous thing in IT. Things are different when the problem is to create a library that abstracts a set of related problems.

What does "powerful" mean, when discussing programming languages?

In the context of programming language discussion/comparison, what does the term "power" mean?
Does it have a well defined meaning? Even a poorly defined meaning?
Say if someone says "language X is more powerful than language Y" or asks the same as a question, what do they mean - or what information are they trying to find out?
It does not have a well-defined meaning. In these types of discussions, "language X is more powerful than language Y" usually means little more than "I like language X more than language Y." On the other end of the spectrum, you'll also usually have someone chime in about how any Turing-complete language can accomplish the same tasks as any other Turing-complete language, so that neither is strictly more powerful than the other.
I think a good meaning for it is expressivity. When a language is highly expressive, it means less code is required to express concepts. To me, this doesn't just mean that you have to write less code to accomplish the same tasks, but also that the code is easily readable by humans. Of course, generally (to a point), having fewer lines of code to read makes the task of reading and understanding easier for humans.
Having a "powerful" standard library comes into play here along the same lines. If a language comes equipped with thorough, complete libraries, then idiomatic code in that language will be able to benefit from the existing library code and not have to repeat or reinvent common functionality in application code. The end result is, again, having to write and read less code to accomplish the same tasks.
I keep saying "generally" and "to a point", because once a language gets too terse, it gets more difficult for humans to decipher. I suppose at this extreme, a language may still be considered "more powerful" (or even "too powerful"). So I guess I'm saying my personal interpretation of "powerful" includes some aspects of "useful" and "readable" in it as well.
C is powerful because it is low level and gives you access to hardware. Python is powerful because you can prototype quickly. Lisp is powerful because its REPL gives you fantastic debugging opportunities. SQL is powerful because you say what you want and the DBMS will figure out the best way to do it for you. Haskell is powerful because each function can be tested in isolation. C++ is powerful because it has ten times the number of syntactic constructs that any one person ever needs or uses. APL is powerful since it can squeeze a ten-screen program into ten characters. Hell, COBOL is powerful because... why else would all the banks be using it? :)
"Powerful" has no real technical meaning, but lots of people have made proposals.
A couple of the more interesting ones:
Paul Graham wants to call a language "more powerful" if you can write the same programs in fewer lines of code (or some other sane, sensible measure of program size).
Matthias Felleisen has written a very serious theoretical study called On the Expressive Power of Programming Languages.
As someone who knows and uses many programming languages, I believe that there are real differences between languages, and that "power" can be a convenient shorthand to describe ways in which one language might be better than another. Nevertheless, whenever I hear a discussion or claim that one language is more powerful than another, I tend to keep one hand firmly on my wallet.
The only meaningful way to describe "power" in a programming language is "can do what I require with the least amount of resources" where "resources" is defined as "whatever costs I'd rather not pay" and could, thus, be development time, CPU time, memory space, money, etc.
So basically the definition of "power" is purely subjective and rendered meaningless in any objective discussion.
Powerful means "high in power". "Power" is something that increases your ability to do things. "Things" vary in shape, size and other dimensions. Loosely speaking, therefore, "powerful" applied to a programming language means that it helps you perform your tasks quickly and efficiently.
This makes "powerful" somewhat well defined but not constant across domains. A language powerful in one domain might be crippling in another, e.g. C is very powerful if you want to do systems-level programming, since it gives you direct access to the machine and hardware, and structures that let you code much faster than you would in assembly. C compilers also produce tight code that runs fast. However, once you move to web applications, C can become very "unpowerful" and crippling, since it's so much effort to get something up and running and you have to worry about a lot of extraneous details like memory.
Sometimes, languages are "powerful" in multiple domains. This gives them a general "powerful" tag (or badge, since we are on SO here). PG's claim is that this is the case with Lisp. That might be true or might not be.
At the end of the day, "powerful" is a loaded word, so you should evaluate who is saying it, why he's saying it, and what it means to your work.
There are really only two meanings people are worried about:
"Powerful" in the sense of "takes less resources (time, money, programmers, LOC, etc.) to achieve the same/better result", and "powerful" in the sense of "is capable of doing a wide range of tasks".
Some languages are extremely resource-effective for a small range of tasks. Others are not so resource-effective but can be applied to a wide range of tasks (e.g. C, which is often used in OS development, creation of compilers and runtime libraries, and work with microcontrollers).
Which of these two meanings someone has in mind when they use the term "powerful" depends on the context (and even then is not always clear). Indeed often it is a bit of both.
Typically there are two distinct meanings:
Expressive, meaning the code tends to be very short and understandable
Low level, meaning you have very fine-grained control over the hardware.
For most languages, these two definitions are at opposite ends of the spectrum: Python is very expressive but not very low level; C is very low level but not very expressive. Depending on which definition you pick, either language is powerful or not powerful.
Nothing, absolutely nothing.
To high-level programmers it might mean a lot of built-in datatypes, or maybe abstractions that make Design Patterns easy to create or follow.
Paul Graham is a very high-level guy; here is what he has to say:
http://www.paulgraham.com/avg.html
Java guys might tell you something about portability, the power to reach every platform.
C/UNIX programmers may tell you that it's speed and efficiency, complete control over every inch of memory.
VHDL/Verilog programmers will tell you it's complete control over every clock and gate, so as not to waste any electricity or time.
But in my opinion a "powerful language" supports all of the features you need to complete your task. Documentation may be important, or perhaps portability, or the ability to do graphics. It could be anything; writing a GUI in assembly is just stupid, and so is trying to design an embedded processor in Flash.
Choosing a language that suits your needs perfectly will always feel like power.
I view the term as marketing fluff, with no one well-defined meaning.
Consider, say, assembler, C, and C++. On occasion one drops from C++ "down" to C for particular needs, and in turn from C down to assembler. So that makes assembler the most powerful, because it's the only language that can do everything. Or, to argue the other way, a single line of C++ code can replace several of C (hiding polymorphic dispatch via function pointers, for example) and a single line of C replaces many of assembler. So C++ is more powerful, because one line does "more".
I think the term had some currency when products such as early databases and spreadsheets had in-built languages, some quite restricted. So vendors would tout their language as "powerful" because it was less restricted.
It can have several meanings. In the very basic sense there's power as far as what is computable. In that sense the most powerful languages are Turing Complete which includes pretty much every general purpose programming language (as opposed to most markup languages and domain specific languages which are often not Turing complete).
In a more pragmatic sense it often refers to how concisely (and readably) you can do certain things. Basically how easy is it to do certain tasks in one language compared to another.
What language is more powerful (besides being somewhat subjective) depends heavily on what you're trying to do. If your requirements are to get something running on a small device with 64k of memory you're likely not going to be using Java. Most likely the right language would be C or C++ (or if you're really hard core assembly). If you need a very simple CRUD app done in 1 day, maybe something like Ruby On Rails would be the way to go (I know Rails is a framework and Ruby is the language, but these days what libraries and frameworks are available factor greatly into picking a language)
I think that, perhaps coincidentally, the physics definition of power is relevant here: "The rate at which work is performed."
Of course, a toaster does not perform very quickly the work of putting out fires. Similarly, the power of a programming language is not universal, but specific to the domain or task to which it is being applied. C is a powerful language for writing device drivers or implementations of higher-level languages; Python is a powerful language for writing general-purpose applications; XPath is a powerful language for writing queries on structured data sets.
So given a problem domain, the power of a language can be said to be the rate at which a competent programmer is able to use it to solve problems in that domain.
One can try to reach a precise answer by recognizing that the elements that define "powerful" (in the context of languages) come from many dimensions.
See how many there could be; even this list will be missing a lot:
runtime speed
code size
expressiveness
supported paradigms
development / debugging time
domain specialization
standard libs
codebase
toolchain ecosystem
portability
community
support / documentation
popularity
(add more here)
These and more parameters together draw a picture of what "programming in some language" would be like at a given level. That is only the definition, though; the only real knowledge comes with the actual practice of using the language, but I digress.
The question comes down to which parameter best represents the intrinsic quality of a language. If you refer to a language in itself, its ultimate, intrinsic purpose is to express things, and thus the most representative parameter is rightfully expressiveness; it is also the one that resonates most frequently when someone talks about how powerful a language is.
The moment you try to widen the question/answer to cover more than the expressiveness of the language "as a language, as a tongue", you are really talking about different kinds of "environment": social environment, development environment, commercial environment, etc.
Depending on the complexity of the environment to be defined, you'll have to mix in more parameters that come from multiple, vast, overlapping and sometimes contradictory dimensions, and eventually the point of getting the definition will be lost or the question will have to be narrowed.
This approximation still won't answer "what is an expressive language", but, again, a common understanding is given by the definitions that Vineet points out in his answer and that Forest remarks on in the comments. I agree; for me, "expression" is "conveying meaning".
I remember many instructors in college calling whatever language they were teaching "powerful".
Leads me to think:
Powerful = a relative term comparing the latest way to code something vs. the original or previous way.
I find it useless to use the word "powerful" in regards to discussing anything software related. Every time my professor in college would introduce a new concept such as polymorphism he would say "so this is a really powerful feature". After a while I got annoyed. If everything is powerful then nothing is. It's all the same. You can write code to do anything. Does it really matter how much code is required to do it? You can say it's short or efficient, but powerful is just useless. Nuclear energy is powerful. Code is words.
I think that power would normally refer to how quickly a language can process data. For example, I found that in Python, as soon as a list exceeds a length of approximately 2000 it becomes unbearably slow, whereas in C++ a list can easily contain 20,000 entries without doing so.

Why are there not more control structures in most programming languages?

Why do most languages seem to only exhibit fairly basic control structures from a logic point of view? Stuff like If ... then, Else..., loops, For each, switch statement, etc. The standard list seems fairly basic from a logic point of view.
Why is there not much more in the way of logical syntactic sugar? Perhaps something like a proposition engine, where you could feed in an array of premises or functions that return complicated self-referential, interdependent functions and results. Something where you could chain together a complex array of conditions, but represented in a way that is easy and clear to read in the code.
Premise 1
Premise 2 if and only if Premise 1
Premise 3
Premise 4 if Premise 2 and Premise 3
Premise 5 if and only if Premise 4
etc...
Conclusion
I realize that this kind of logic can be constructed with functions and/or nested conditional statements. But why are there not generally more syntax options for structuring these kinds of logical propositions without resulting in hairy-looking conditional statements that can be hard to read and debug?
Is there an explanation for the kinds of control structures we typically see in mainstream programming languages? Are there specific control structures you would like to see directly supported by a language's syntax? Does this just add unnecessary complexity to the language?
Have you looked at Prolog? A Prolog program is basically a set of rules that is turned into one big evaluation engine.
From my personal experience Prolog is a bit too weird and I actually prefer ifs, whiles and so on but YMMV.
Boolean algebra is not difficult, and provides a solution for any conditionals you can think of, plus an infinite number of other variants.
You might as well ask for special syntax for "commonly-used" arithmetic expressions. Who is to say what qualifies as commonly-used? And where do you stop adding special-case syntax?
Adding to the complexity of a language parser is not preferable to using constructive expression syntax, combined with extensibility through defining functions.
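To illustrate, the question's premise chain needs nothing beyond ordinary definitions. In Haskell (one hedged reading: "P if Q" defines P by Q, and "if and only if" is just equality on Bool) it becomes:

```haskell
premise1, premise2, premise3, premise4, premise5, conclusion :: Bool
premise1 = True
premise2 = premise1              -- Premise 2 if and only if Premise 1
premise3 = True
premise4 = premise2 && premise3  -- Premise 4 if Premise 2 and Premise 3
premise5 = premise4              -- Premise 5 if and only if Premise 4
conclusion = premise5

main :: IO ()
main = print conclusion  -- True
```

No special syntax was required; plain definitions already chain the conditions legibly.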
It's been a long time since my Logic class in college but I would guess it's a mixture of difficulty in writing them into the language vs. the frequency with which they'd be used. I can't say I've ever had the need for them (not that I can recall). For those times that you would require something of that ilk the language designers probably figure you can work out the logic yourself using just the basic structures.
Just my wild guess though.
Because most programming languages don't provide sufficient tools for users to implement them, it is not seen as an important enough feature for the implementer to provide as an extension, and it isn't demanded enough or used enough to be added to the standard.
If you really want it, use a language that provides it, or provides the tools to implement it (for instance, lisp macros).
It sounds as though you are describing a rules engine.
The basic control structures we use mirror what the processor can do efficiently. Basically, this boils down to simple test-and-branches.
It may seem limiting to you, but many people don't like the idea of writing a simple-looking line of code that requires hundreds or thousands (or millions) of processor cycles to complete. Among these people are systems software folks, who write things like Operating Systems and compilers. Naturally most compilers are going to reflect their own writer's concerns.
It relates to the concern regarding atomicity. If you can express A, B, C, and D in terms of simpler structures Y and Z, why supply A, B, C, and D at all rather than just Y and Z?
The existing languages reflect 60 years of the tension between atomicity and usability. The modern approach is "small language, large libraries". (C#, Java, C++, etc).
Because computers are binary, all decisions must come down to a 1/0, yes/no, true/false, etc.
To be efficient, the language constructs must reflect this.
Eventually all your code goes down to a micro-code that is executed one instruction at a time. Until the micro-code and accompanying CPU can describe something more colorful, we are stuck with a very plain language.

Should programming languages be intuitive?

What features could be added to a new programming language to make it more "intuitive"? When it comes to websites and desktops, we favor high usability, almost intuitive usability. It is becoming increasingly expected that your application should "just work". For a certain class of applications, the idea that one has to RTFM is a mark against the effectiveness of the application. People tend to expect the application to just work the way they "think" it should work. One could argue that this is a worthy standard that designers should strive for.
Can the same usability rigor apply to programming languages and developer environments? I realize there are tools like IntelliSense that provide hints, and a good IDE provides a lot of assistance. But what about the core language itself? What could be added (or removed) that makes certain programming techniques or algorithms more obvious to implement? How does one make regular expressions or recursion more intuitive? Or is this just folly?
Take a more concrete example: liquid layouts in HTML, CSS, or Flex and MXML. In HTML and CSS, the box model is anything but intuitive, given the different implementations in Internet Explorer and the other browsers. And unless someone reads the documentation or studies the concept of the box model, it would be difficult to "just get it" when designing a layout on one's first stab at CSS. I would argue this is why tables thrived in the early days. The box model was implicit in the concept of a table cell. With the help of tools like Dreamweaver, one could get one's mind around percentage widths and layout within the constraints of table cells. Then CSS came into maturity and a whole set of valid reasons emerged for why tables are not for layout. But to achieve the same effects, designers had to really study the CSS implementations and the box model, and inject a new layer of abstraction into their thinking.
In another example, I find that when programming lots of things in ActionScript and MXML, the whole concept of fluid layouts and percentage-based widths of elements is not very obvious and doesn't always follow intuition. I understand the basic problem: the Adobe Flash player and the layout need to understand things in absolute pixel terms. When it comes to the potential width of a component, I understand why percentages are not immediately obvious to implement at the core level of the code. Theoretically speaking, the Flash Player needs to know (or calculate) the exact width of a component so that it can provide the proper geometry to the video card when doing a draw on the screen. But when you introduce some concept of percentages, then you introduce the theoretical possibility of an infinite width. And to find "infinity - 1" pixels is not something a computer can directly do without some layer of abstraction and calculation. The viewport must be referenced. The program must know its boundaries. So absolute widths are the norm, although humans might prefer to design in terms of percentages.
When it comes to programming languages, can there be expressions and features that assist intuition when thinking about a programming task? Or are we better off "thinking like a computer" and just RTFM'ing the manual when we need to understand how to implement some feature or layout in code?
If you could change the syntax or semantics of your programming language of choice, what would you add, change, or remove to improve its "intuitiveness"?
Addendum: the reason for asking this question is inspired by seeing examples of what "novices" were able to achieve in Smalltalk in Alan Kay's lecture Doing with Images Makes Symbols.
"If you could change the syntax or semantics of your programming language of choice what would you add, change, or remove to improve the "intuitiveness" of it?
"
Programming is hard. Really hard. Syntax changes don't matter much. IDE's are irrelevant to the fundamental challenge of programming.
The thing that is often baffling is the semantics of the language.
I don't know what "intuitive" means with respect to a thing as abstract as a programming language. Indeed, "intuition" is probably a bad thing. Coming to a programming language with intuition means preconceived notions, biases and intellectual junk will take over.
I would never expect to "just get it" for anything on any level anywhere. Programming requires clear thinking -- not "intuition" -- not "expectation".
The only thing we can ever do is read the manual and understand the unique, distinct, novel semantics of the new thing we're confronted with.
I do know this: elegant simplicity is essential. Orthogonality of features. Clarity. Precision. Absence of exceptions or special cases. Above all, simplicity.
Layering on language features is fundamentally bad.
Covering language problems by layering in a complex IDE is worse.
See http://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD854.html
"when faced with something new and unfamiliar we try to relate it to what we are familiar with. In the course of the process we invent the analogies that enable us to do so.
It is clear that the above way of trying to understand does not work too well when we are faced with something so radically new, so without precedent, that all analogies we can come up with are too weak and too shallow to be of great help. A radically new technology can create such circumstances and the wide-spread misunderstanding about programming strongly suggests that this has happened with the advent of the automatic computer. "
In short, "intuition" and "intellectual baggage" is the problem of the programmer. The best way to understand a technology is to approach it as something fresh, new and otherwise unknown.
Bottom Line.
The complexity is inherent.
You have two choices.
Develop intellectual tools (i.e., abstraction, summarization, etc.) to cope with it.
Get a job in another field.
Asking for the inherently complicated world of computing to morph into something any one person finds "intuitive" can't happen. Computing is too complicated to be "intuitive".
Another field I've seen that addresses the complexity of the "syntax" of programming languages is that of Visual Programming Languages. The basic idea behind VPLs is to take the constructs of programming languages (decisions, subroutines, functions, etc.) and represent them graphically, typically as a data-flow diagram. One such language that has been gaining popularity recently is the Microsoft Visual Programming Language. I have not used it, and cannot make claims as to its power, but I have used LabVIEW to great effect, and I can say that you can do pretty much anything you can think of even in LabVIEW -- but you do have to think of it in a very different way.
That said, I find I have a personal preference for code rather than VPLs.
One step folks are taking, which has as much to do with the base class library as with the language itself (although to be honest, the two are often synonymous), is the concept of a Fluent API. The basic idea is to make code "read like a sentence", the idea being that this makes the code more flexible and maintainable.
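For what it's worth, Haskell has a cheap analogue: reverse application (&) from Data.Function lets a pipeline read in the order it happens. The order-building functions here are invented for illustration:

```haskell
import Data.Function ((&))

data Order = Order { items :: [String], expedited :: Bool, note :: String }
  deriving Show

newOrder :: Order
newOrder = Order [] False ""

addItem :: String -> Order -> Order
addItem i o = o { items = i : items o }

expedite :: Order -> Order
expedite o = o { expedited = True }

withNote :: String -> Order -> Order
withNote n o = o { note = n }

-- The "fluent" part: each step reads left to right, like a sentence.
main :: IO ()
main = print $ newOrder
             & addItem "keyboard"
             & addItem "mouse"
             & expedite
             & withNote "leave at door"
```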

Why create a new programming language? [closed]

What is the real benefit of creating a new programming language? It is highly unlikely that you are going to actually use it.
In short, how will the process of creating a new language make you a better programmer?
You will understand the decisions behind language design and garner a better overall understanding of the compromises made between readability, performance, and reliability.
Your familiarity with concepts such as recursion, closures, garbage collection, reference management, typing, data structures and how these things actually work will increase. Most programmers will utilize resources and language features better.
Similar to the way we learn new ways to code solutions when we use other languages, when we write our own languages we explore new ways to create solutions. See Metaprogramming. Contrary to what the question suggests, Domain Specific Languages are used in many environments.
If you're writing a compiler, you'll learn more about how computers work than you ever did before. (Depending on your goal, perhaps more than you intended to learn)
When I wrote my own sort routines in school, even re-implementations of good ones, it really drove home some of the weaknesses of some of the algorithms.
In short, there's an order of magnitude of difference in a programmer who knows how to use tools, and a programmer who knows how to make tools.
I can speak from experience here ...
Fun, Domain specific problem solving, Complexity in context
I love creating new languages for fun, and for tackling domain specific problems. A very simple example might be Wikipedia markup or something as complex as Erlang which specializes in concurrent processing.
Many general purpose languages are similar, because they are general purpose. Sometimes you need a more accurate abstraction of the mechanics of the problem you are solving. Another example would be the M4 macro language.
Remember, a language is not magic; it is just a collection of defined grammatical structures with implied semantics. SQL is a good example of a language for a purpose, with that purpose defined in its syntax and semantics.
Learning how languages work, what makes a language parsable, what makes semantics sensible and the implementation of this, I think can make you a better programmer.
Compilers embody a lot of the theory that underpins computer science:
Translation, abstraction, interpretation, data structures, state... the list goes on. Learning these things will make you understand the implications of your program and what goes on under the hood. You can of course learn things independently, but compilers are a great context in which to learn complex topics such as DFA/NDFA automata, stack-based parsers, and abstract syntax trees.
Compilers are beautiful machines, I think :)
Multiple reasons:
bragging rights
economic incentives
extreme boredom
dissatisfaction with the hundreds of existing languages
untreated insanity
desire to implement language that facilitates new design concepts (like languages that make design patterns more straightforward to incorporate)
other reasons, perhaps
I think Jeff Atwood answers this well in this Coding Horror post. Though he's talking about a more general issue (why create any new library, framework, etc., when other artifacts in the same design space already exist), I suspect that exactly this broader viewpoint gives him a different and interesting perspective.
I will add that if you write a semantics, so that your language is an actual language and not merely what happens to be accepted by some particular implementation, you will learn an enormous amount about how to describe computational behaviors precisely:
You will learn what kinds of behaviors are and are not easy to describe—and prove correct.
You will learn how to trade off different kinds of formalisms for describing different kinds of features.
You will ultimately be a better programmer because the formalism and proof techniques you will learn will apply to all kinds of problems: locking techniques, safety properties in kernels, lock-free data structures, network protocols, and information security, to name just a few. All these areas are amenable to the same kind of formal treatment that is given to a programming language.
To pick just one example, if you give your language a static type system and you then prove that a well-typed program is guaranteed to be memory-safe, you will learn just as much (on a different dimension) as you will by writing an interpreter or compiler.
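A toy taste of that idea, not a proof: with a GADT, the object language's typing rules live in the constructors, so ill-typed terms (adding an Int to a Bool, say) cannot even be written, and the evaluator needs no runtime checks.

```haskell
{-# LANGUAGE GADTs #-}

-- Each constructor records the type of the expression it builds.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Total and tag-free: well-typed terms cannot "go wrong".
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))
```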
EDIT: If you want to learn this stuff I think the easiest starting point is Benjamin Pierce's series of two books on Types and Programming Languages. There is also a graduate textbook by Glynn Winskel which is a little harder but more oriented toward semantics and proof techniques.
Creating Domain Specific Languages is very valuable. Instead of thinking only about general purpose languages, consider creating so-called "little languages" that clearly express abstractions in your project.
For example, in a recent project I decided to use a Command Pattern to drive a Service Layer. I found some repetition in my command code, so I wrote a little compiler that accepts a simple language that expresses commands and emits command implementations in the "underlying" language.
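In that spirit, here's a toy version of the little-language idea; the command set is invented for illustration, not taken from the project described:

```haskell
import Control.Monad (replicateM_)
import Data.Maybe (mapMaybe)
import Text.Read (readMaybe)

-- The "little language": one-line commands parsed into an ADT...
data Command = Greet String | Repeat Int Command
  deriving Show

parseCommand :: String -> Maybe Command
parseCommand s = case words s of
  ["greet", n]          -> Just (Greet n)
  ("repeat" : k : rest) -> Repeat <$> readMaybe k
                                  <*> parseCommand (unwords rest)
  _                     -> Nothing

-- ... and an interpreter that emits the behavior.
runCommand :: Command -> IO ()
runCommand (Greet n)    = putStrLn ("hello, " ++ n)
runCommand (Repeat k c) = replicateM_ k (runCommand c)

main :: IO ()
main = mapM_ runCommand
     (mapMaybe parseCommand ["greet world", "repeat 2 greet again"])
```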
For the same reason that taking a Compiler Construction course at university will benefit you even if you never write a single compiler in your whole life. It's a look under the hood, if you may.
In addition to what altCognito said, which is a theoretical/academic perspective, some highly specialized languages are created to solve specific problems efficiently when existing "general-purpose" languages are either extremely inefficient for your task or there just isn't an easy-to-use existing alternative.
Granted, such cases tend to be rare, and if your first instinct on encountering a problem is "I need a new language for this", then most likely you're missing something. There needs to be a fairly substantial gap between "available" tech and your needs to warrant such an undertaking.
I think there are really two conceptually different answers to this. First, you gain an understanding of how compilers transform your code into executable code. This can help you make better decisions about how to structure your code to optimize (or allow it to be optimized) better. If, for instance, you knew that a certain construct would prohibit the compiler from inlining a code block or unrolling a loop, then you could avoid that if performance became a real concern.
Second, all current languages were invented (or derived) at some point in history. For each one of these, the likelihood that it would actually be used was potentially small, yet here they are. They all found their reason for being in the fact that someone wanted to do something that wasn't possible or easy to do in an existing language and decided to do something about it. Laziness (or the desire to let the computer do the work for you) is the mother of invention.
Just for fun... and then you'll realize that you cannot make anything better than all the languages you thought sucked xD (so you stop complaining about them).
how will the process of creating a new language make you a better programmer?
You're right, you may or may not use the language, but at the least the experience you gain from doing it will help you understand the implementation of programming languages, and teach you things you will be able to apply to future computing problems you run into.
Writing a compiler or interpreter requires a very firm understanding in computer science theory. And if you're compiling to machine code instead of to another language, it requires a firm understanding in hardware design as well.
In addition to that, knowing how to design a compiler means you will have a better understanding of languages in general, and the languages you work with specifically. You will have a better appreciation for syntax and trade-offs the language designers took when they wrote their specification.
It's not that writing compilers makes you a better programmer. It's the deep understanding of language theory and compiler design that makes you better.
Mostly you do this for fun or to broaden your comprehension of a subject.
I disagree that creating a new language influences performance. Performance of what? IMHO, execution speed should not depend on the language constructs but on what the language is translated to, which is something different: like creating a syntax for a language and writing a compiler/virtual machine for it.
Because a talking frog is pretty neat.
I want a managed language that permits tinkering with its internals as standard practice. Kind of like Ruby's duck punching on a wider scale.
I should, as the client of a library, be able to swap out library functions that don't do what I want.
That's what drives me crazy with .NET. There are bugs in the framework Microsoft will not fix and thanks to GAC signing I cannot. And even if it were not for GAC signing, hotpatching a global library is a bad idea (might break some other application).
I for one don't care how compilers work, don't care about learning new languages, and don't care about using scripting languages like Perl and JavaScript. I'm much more interested in the ways big programs are constructed (or should be constructed). There are still no good solutions for making LARGE software as easy to use as prototyped code. Programming languages are not helping with that. They solve trivial problems like sorting and memory deallocation, and leave you struggling alone with the problems that really matter (those that keep you or your firm from losing money).
