Patterns/conventions followed by companies to name keywords

In computer programming, a naming convention is a set of rules for choosing the character sequence to be used for identifiers which denote variables, types, functions, and other entities in source code and documentation.
There are several naming conventions in different languages.
But are there any patterns/conventions followed by companies to name keywords and other terms used in programming environments?

Your question is kind of broad in the sense that it is about "terms used in programming environments". First I'm going to reduce the answer to "language keywords":
No, there isn't a general pattern for programming language keywords; it depends on each language designer's preference and the language's "inheritance", e.g. Java/JavaScript share some of the keywords inherited from C/C++.
Most of the time the reason to choose one keyword format over another has to do with how the language grammar is handled to create the parser. For example, if you allow keywords that start with "-", then you need to be careful not to generate ambiguity with expressions like: - value, -value or valueA - valueB.
Some languages prefer to reduce keywords to the minimum. For example, Lisp and Smalltalk don't have many keywords, and even constructs for "if" or loops are handled as function/method calls.
And for the broad sense of "terms used in programming environments", it depends a lot on the background of the language designers (rather than on the company that backs them), for example:
Java: method vs Smalltalk: the same concept is split into selector/message/method
JavaScript: inline function vs Haskell: lambda expression
JavaScript: prototype vs Self: parent slot
package/module and use/import/include are most of the time aliases for the same concepts

Related

How are computer languages made using automata theory?

I tried really hard to find an answer to this question using Google.
But I wonder: how are these high-level programming languages created using principles of automata, or is automata theory not involved in defining the languages?
Language design tends to have two important levels:
Lexical analysis - the definition of what tokens look like. What is a string literal, what is a number, what are valid names for variables, functions, etc.
Syntactic analysis - the definition of how tokens work together to make meaningful statements. Can you assign a value to a literal, what does a block look like, what does an if statement look like, etc.
The lexical analysis is done using regular languages, and generally tokens are defined using regular expressions. It's not that a DFA is used (most regex implementations are not DFAs in practice), but that regular expressions tend to line up well with what most languages consider tokens. If, for example, you wanted a language where all variable names had to be palindromes, then your language's token specification would have to be context-free instead.
The input to the lexing stage is the raw characters of the source code. The alphabet would therefore be ASCII or Unicode or whatever input your compiler is expecting. The output is a stream of tokens with metadata, such as string-literal (value: hello world) which might represent "hello world" in the source code.
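To make the lexing stage concrete, here is a minimal sketch in Python (not from any particular compiler; the token names and the sample input are invented) where each token kind is defined by a regular expression and the lexer turns raw characters into a stream of tagged tokens:

import re

# Each token kind is defined by a regular expression, as in a typical lexer spec.
TOKEN_SPEC = [
    ("NUMBER",  r"\d+"),
    ("STRING",  r'"[^"]*"'),
    ("NAME",    r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=()]"),
    ("SKIP",    r"\s+"),          # whitespace is matched but not emitted
]
MASTER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

def tokenize(source):
    """Turn raw characters into a stream of (kind, value) tokens."""
    for match in MASTER.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":
            yield (kind, match.group())

print(list(tokenize('greeting = "hello world" + 42')))
# [('NAME', 'greeting'), ('OP', '='), ('STRING', '"hello world"'),
#  ('OP', '+'), ('NUMBER', '42')]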
The syntactic analysis is typically done using a subset of context-free languages called LL or LR parsers. This is because the general implementation of CFGs (PDAs) is nondeterministic. LL and LR parsing are ways to make deterministic decisions with respect to how to parse a given expression.
We use CFGs for code because this is the level on the Chomsky hierarchy where nesting occurs (where you can express the idea of "depth", such as with an if within an if). Higher or lower levels on the hierarchy are possible, but a regular syntax would not be able to express nesting easily, and context-sensitive syntax would probably cause confusion (but it's not unheard of).
The input to the syntactic analysis step is the token stream, and the output is some form of executable structure, typically a parse tree that is either executed immediately (as in interpreted languages), or stored for later optimization and/or execution (as in compiled languages), or something else (as in intermediate-compiled languages like Java). The alphabet of the CFG is therefore the possible tokens specified by the lexical analysis step.
So this whole thing is a long-winded way of saying that it's not so much the automata theory that's important, but rather the formal languages. We typically want to have the simplest language class that meets our needs. That typically means regular tokens and context-free syntax, but not always.
The implementation of the regular expression need not be an automaton, and the implementation of the CFG is typically not a general PDA, because PDAs are nondeterministic; instead we define deterministic parsers for reasonable subsets of the CFG class.
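As a companion sketch for the parsing stage, here is a tiny hand-written recursive-descent (LL-style) parser in Python for a made-up expression grammar; it consumes a token stream like the one in the lexer sketch above and produces a nested-tuple parse tree. The grammar and node names are purely illustrative:

# Grammar (illustrative only):
#   expr -> term (("+" | "-") term)*
#   term -> NUMBER | NAME | "(" expr ")"
def parse(tokens):
    tokens = list(tokens)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else ("EOF", "")

    def take(expected_kind=None):
        nonlocal pos
        kind, value = peek()
        if expected_kind and kind != expected_kind:
            raise SyntaxError("expected %s, got %s" % (expected_kind, kind))
        pos += 1
        return value

    def term():
        kind, value = peek()
        if kind in ("NUMBER", "NAME"):
            return (kind.lower(), take())
        take("OP")            # assume "("
        node = expr()
        take("OP")            # assume ")"
        return node

    def expr():
        node = term()
        while peek() == ("OP", "+") or peek() == ("OP", "-"):
            op = take()
            node = ("binop", op, node, term())
        return node

    return expr()

print(parse([("NAME", "x"), ("OP", "+"), ("NUMBER", "1")]))
# ('binop', '+', ('name', 'x'), ('number', '1'))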
More generally we talk about Theory of computation.
What has happened through the history of programming languages is that it has been formally proven that higher-level constructs are equivalent to the constructs in the abstract machines of the theory.
We prefer the higher-level constructs in modern languages because they make programs easier to write, and easier to understand by other people. That in turn leads to easier peer review and teamwork, and thus better programs with fewer bugs.
The Wikipedia article about Structured programming tells part of the history.
As to Automata theory, it is still present in the implementation of regular expression engines, and in most programming situations in which a good solution consists in transitioning through a set of possible states.
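As a small illustration of that last point, here is a hand-written DFA in Python (the states and the toy "number" language it recognizes are just an example) expressed as an explicit transition table:

# A hand-written DFA recognizing simple decimal numbers such as "42" or "3.14".
# States and transitions are illustrative, not taken from any particular library.
TRANSITIONS = {
    ("start",      "digit"): "int_part",
    ("int_part",   "digit"): "int_part",
    ("int_part",   "dot"):   "frac_start",
    ("frac_start", "digit"): "frac_part",
    ("frac_part",  "digit"): "frac_part",
}
ACCEPTING = {"int_part", "frac_part"}

def classify(ch):
    return "digit" if ch.isdigit() else "dot" if ch == "." else "other"

def is_number(text):
    state = "start"
    for ch in text:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:          # no transition defined: reject
            return False
    return state in ACCEPTING

print(is_number("3.14"), is_number("3."), is_number("abc"))  # True False False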

What scripting language(s) fits this description?

I like programming language design/implementation and I'd like to contribute to one of the less mature ones. I'm looking for a scripting language that is:
Embeddable
Dynamically, strongly typed
Small & Lightweight (elaborated below)
Implemented in C++
By lightweight I mean something like Lua: a very small standard library that can be easily extended.
And some (random) design principles that I like:
The language should have a few very powerful built-in types, like Python (int, float, list/array, map/dictionary, set and tuple).
A function is an object, like in Lua (this makes lambda functions trivial)
Arguments are passed as tuples that automatically get extracted.
And last and probably also least, I like C-style syntax.
If you are thinking about yelling "subjective", "there is no best language" or "not a question", you misread the question. I'm merely asking for a list of scripting languages that match the description above.
Cython
Shedskin
Psyco
These are all scripting languages that are either variants or restricted subsets of the original Python language, and that compile to C, C++, or machine code. I believe these should satisfy your requirement spec.
Shedskin and psyco also currently have calls for contributions on their main page.
HTH

Generic programming vs. Metaprogramming

What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the Wikipedia entry for Objective-C, I came across:
In addition to C’s style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
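A rough Python sketch of the three definitions above (all names invented for illustration): a plain function manipulates data, a generic function makes only minimal assumptions about its data, and a metaprogram manipulates a program itself, here by building a class at runtime.

# Programming: manipulate data.
def total_ints(numbers):
    return sum(numbers)                      # assumes a list of ints

# Generic programming: same job, minimal assumptions about the data.
def total(items, combine, start):
    result = start
    for item in items:
        result = combine(result, item)       # works for ints, strings, sets, ...
    return result

# Metaprogramming: manipulate a program (here, build a class at runtime).
def make_record(name, fields):
    def __init__(self, **kwargs):
        for field in fields:
            setattr(self, field, kwargs.get(field))
    return type(name, (object,), {"__init__": __init__, "fields": fields})

Point = make_record("Point", ["x", "y"])     # a class that did not exist in the source
print(total_ints([1, 2, 3]))                             # 6
print(total(["a", "b", "c"], lambda a, b: a + b, ""))    # abc
print(Point(x=1, y=2).x)                                 # 1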
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs, e.g. the way templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as an input and produces an actual function/class as an output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about parameterized (usually with other types) types (including functions).
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types. E.g. a sort function, which can sort a collection of comparables instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++ a single feature supports both generic programming and metaprogramming. (Templates are very powerful.)
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming, code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming; I hope AMB will become mainstream in the next 5 years or so...).
In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or Vector<YourClass> if you want it specific to your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates themselves are Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
Generic programming is a very simple form of metacoding, albeit not usually at runtime. It's more like the preprocessor in C and relates more to template programming in most use cases and basic implementations.
You'll often find in typed languages that you create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting one of them to a string literal and then replacing the class name with a variable to interpolate, as sketched below.
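Here is a toy Python sketch of that string-interpolation approach (the template and names are made up): the class definition is kept as a string with the type-specific parts left as placeholders, and exec-ing the interpolated string produces each "duplicated" class without writing it by hand.

# A class body as a string template; {name} and {item_type} are the "variables".
CLASS_TEMPLATE = """
class {name}:
    def __init__(self):
        self.items = []
    def add(self, item):
        if not isinstance(item, {item_type}):
            raise TypeError("expected {item_type}")
        self.items.append(item)
"""

def generate_container(name, item_type):
    namespace = {}
    exec(CLASS_TEMPLATE.format(name=name, item_type=item_type), namespace)
    return namespace[name]

IntBag = generate_container("IntBag", "int")
StrBag = generate_container("StrBag", "str")

bag = IntBag()
bag.add(3)
print(bag.items)        # [3]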
Where generics are used at runtime it's a bit different; in that case it's simply variable programming, programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, like the variable type you expect or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication. Although there are ways around it or to mitigate it, they aren't particularly convenient. Generics have come along as a kind of variable variable, to allow making the variable type variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metacoding, albeit a very simple case.
The effect of not having variability where you need it is to unroll your variables; that is, instead of having a variable, you are forced to make an implementation for every possible would-be variable value.
As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. These will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object, which extends Object, and then want to retrieve it, the method signature on the storage object will simply return Object but your code will expect a Shop object. This will break compilation with the downgrade of the object unless you cast it back up to Shop. This raises another conundrum, as without generics there is no way to check for compatibility and ensure the object you are storing is a Shop class.
Java avoids metaprogramming and tries to keep the language simple, using OOP principles of polymorphism instead to make flexible code. However there are some pressing and recurring problems that through experience have presented themselves and are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language but sparingly imports concepts from there to solve the most nagging problems.
Programming languages that offer lavish metacoding facilities can be significantly more productive than languages that avoid it barring special cases, reflection, OOP polymorphism, etc. However it often also takes a lot more skill and expertise to produce understandable, maintainable and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.

Non-C++ languages for generative programming?

C++ is probably the most popular language for static metaprogramming and Java doesn't support it.
Are there any other languages besides C++ that support generative programming (programs that create programs)?
The alternative to template style meta-programming is Macro-style that you see in various Lisp implementations. I would suggest downloading Paul Graham's On Lisp and also taking a look at Clojure if you're interested in a Lisp with macros that runs on the JVM.
Macros in Lisp are much more powerful than C/C++ style and constitute a language in their own right -- they are meant for meta-programming.
Let me list a few important details about how metaprogramming works in Lisp (or Scheme, or Slate, or pick your favorite "dynamic" language):
When doing metaprogramming in Lisp you don't have to deal with two languages. The meta-level code is written in the same language as the object-level code it generates. Metaprogramming is not limited to two levels, and it's easier on the brain, too.
In Lisp you have the compiler available at runtime. In fact the compile-time/run-time distinction feels very artificial there and is very much subject to where you place your point of view. In Lisp, with a mere function call, you can compile functions to machine instructions that you can use from then on as first-class objects; i.e. they can be unnamed functions that you can keep in a local variable, or a global hashtable, etc...
Macros in Lisp are very simple: a bunch of functions stuffed in a hashtable and given to the compiler. For each form the compiler is about to compile, it consults that hashtable. If it finds a function, it calls it at compile time with the original form, and in place of the original form it compiles the form this function returns. (Modulo some unimportant details.) So Lisp macros are basically plugins for the compiler.
Writing a Lisp function in Lisp that evaluates Lisp code is about two pages of code (this is usually called eval). In such a function you have all the power to introduce whatever new rules you want on the meta level. (Making it run fast is going to take some effort though... about the same as bootstrapping a new language... :) A toy sketch of this macro-table idea is given below.
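Here is that toy sketch in Python (this is not Lisp, and the form names are invented): forms are nested lists standing in for s-expressions, and a dictionary of macro functions rewrites forms before evaluation, in the "plugins for the compiler" spirit described above.

# Forms are nested Python lists standing in for s-expressions.
MACROS = {}

def defmacro(name):
    def register(fn):
        MACROS[name] = fn
        return fn
    return register

@defmacro("unless")
def expand_unless(cond, body):
    # (unless c b)  ->  (if c None b)
    return ["if", cond, None, body]

def evaluate(form, env):
    if not isinstance(form, list):                 # atoms evaluate to themselves
        return env.get(form, form) if isinstance(form, str) else form
    head, *args = form
    if head in MACROS:                             # macro: rewrite the form, then evaluate it
        return evaluate(MACROS[head](*args), env)
    if head == "if":
        cond, then_branch, else_branch = args
        chosen = then_branch if evaluate(cond, env) else else_branch
        return evaluate(chosen, env)
    if head == "+":
        return sum(evaluate(a, env) for a in args)
    raise ValueError("unknown form: %r" % head)

print(evaluate(["unless", False, ["+", 1, 2]], {}))   # 3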
random examples of what you can implement as a user library using lisp metaprogramming (these are actual examples of common lisp libraries):
extend the language with delimited continuations (hu.dwim.delico)
implement a js-to-lisp-rpc macro that you can use in javascript (which is generated from lisp). it expands into a mixture of js/lisp code that automatically posts (in the http request) all the referenced local variables, decodes them on the server side, runs the lisp code body on the server, and returns back the return value to the javascript side.
add prolog like backtracking to the language that very seamlessly integrates with "normal" lisp code (see screamer)
an XML templating extension to common lisp (includes an example of reader macros that are plugins for the lisp parser)
a ton of small DSL's, like loop or iterate for easy looping
Template metaprogramming is essentially abuse of the template mechanism. What I mean is that you get basically what you'd expect from a feature that was an unplanned side-effect --- it's a mess, and (although tools are getting better) a real pain in the ass because the language doesn't support you in doing it (I should note that my experience with state-of-the-art on this is out of date, since I essentially gave up on the approach. I've not heard of any great strides made, though)
Messing around with this in about '98 was what drove me to look for better solutions. I could write useful systems that relied on it, but they were hellish. Poking around eventually led me to Common Lisp. Sure, the template mechanism is Turing complete, but then again so is intercal.
Common Lisp does metaprogramming `right'. You have the full power of the language available while you do it, no special syntax, and because the language is very dynamic you can do more with it.
There are other options of course. No other language I've used does metaprogramming better than Lisp does, which is why I use it for research code. There are lots of reasons you might want to try something else though, but it's all going to be tradeoffs. You can look at Haskell/ML/OCaml etc. Lots of functional languages have something approaching the power of Lisp macros. You can find some .NET targeted stuff, but they're all pretty marginal (in terms of user base etc.). None of the big players in industrially used languages have anything like this, really.
Nemerle and Boo are my personal favorites for such things. Nemerle has a very elegant macro syntax, despite its poor documentation. Boo's documentation is excellent but its macros are a little less elegant. Both work incredibly well, however.
Both target .NET, so they can easily interoperate with C# and other .NET languages -- even Java binaries, if you use IKVM.
Edit: To clarify, I mean macros in the Lisp sense of the word, not C's preprocessor macros. These allow definition of new syntax and heavy metaprogramming at compiletime. For instance, Nemerle ships with macros that will validate your SQL queries against your SQL server at compiletime.
Nim is a relatively new programming language that has extensive support for static meta-programming and produces efficient (C++ like) compiled code.
http://nim-lang.org/
It supports compile-time function evaluation, lisp-like AST code transformations through macros, compile-time reflection, generic types that can be parametrized with arbitrary values, and term rewriting that can be used to create user-defined high-level type-aware peephole optimizations. It's even possible to execute external programs during the compilation process that can influence the code generation. As an example, consider talking to a locally running database server in order to verify that the ORM definition in your code (supplied through some DSL) matches the schema of the database.
The "D" programming language is C++-like but has much better metaprogramming support. Here's an example of a ray-tracer written using only compile-time metaprogramming:
Ctrace
Additionally, there is a gcc branch called "Concept GCC" that supports metaprogramming constructs that C++ doesn't (at least not yet).
Concept GCC
Common Lisp supports programs that write programs in several different ways.
1) Program data and program "abstract syntax tree" are uniform (S-expressions!)
2) defmacro
3) Reader macros.
4) MOP
Of these, the real mind-blower is MOP. Read "The Art of the Metaobject Protocol." It will change things for you, I promise!
I recommend Haskell. Here is a paper describing its compile-time metaprogramming capabilities.
Lots of work in Haskell: Domain Specific Languages (DSL's), Executable Specifications, Program Transformation, Partial Application, Staged Computation. Few links to get you started:
http://haskell.readscheme.org/appl.html
http://www.cse.unsw.edu.au/~dons/papers/SCKCB07.html
http://www.haskell.org/haskellwiki/Research_papers/Domain_specific_languages
The ML family of languages were designed specifically for this purpose. One of OCaml's most famous success stories is the FFTW library for high-performance FFTs that is C code generated almost entirely by an OCaml program.
Cheers,
Jon Harrop.
Most people try to find a language that has "ultimate reflection" for self-inspection and something like "eval" for reifying new code. Such languages are hard to find (LISP being a prime counterexample) and they certainly aren't mainstream.
But another approach is to use a set of tools that can inspect, generate, and manipulate program code. Jackpot is such a tool focused on Java: http://jackpot.netbeans.org/
Our DMS software reengineering toolkit is such a tool; it works on C, C++, C#, Java, COBOL, PHP, JavaScript, Ada, Verilog, VHDL and a variety of other languages. (It uses production-quality front ends to enable it to read all these languages.) Better, it can do this with multiple languages at the same instant. See http://www.semdesigns.com/Products/DMS/DMSToolkit.html
DMS succeeds because it provides a regular method and support infrastructure for complete access to the program structure as ASTs, and in most cases additional data such as symbol tables, type information, control and data flow analysis, all necessary to do sophisticated program manipulation.
'Metaprogramming' is really a bad name for this specific feature, at least when you're discussing more than one language, since this feature is only needed for a narrow slice of languages that are:
static
compiled to machine language
heavily optimised for performance at compile time
extensible with user-defined data types (OOP in C++'s case)
hugely popular
Take out any one of these, and 'static metaprogramming' just doesn't make sense. Therefore, I would be surprised if any remotely mainstream language had something like that, as understood in C++.
Of course, dynamic languages and several functional languages support totally different concepts that could also be called metaprogramming.
Lisp supports a form of "metaprogramming", although not in the same sense as C++ template metaprogramming. Also, your term "static" could mean different things in this context, but Lisp also supports static typing, if that's what you mean.
The Meta-Language (ML), of course: http://cs.anu.edu.au/student/comp8033/ml.html
It does not matter what language you are using -- any of them is able to do Heterogeneous Generative Metaprogramming. Take any dynamic language such as Python or Clojure, or Haskell if you are a type fan, and write models in this host language that are able to compile themselves into some mainstream language you want, or are forced to use by your team/employer.
I found object graphs a good model for internal model representation. This graph can mix attributes and ordered subgraphs in a single node, which is native to attribute grammar and AST. So, object graph interpretation can be an effective layer between your host and target languages and can act itself as some sort of no-syntax language defined over data structures.
The closest model is an AST: describe AST trees in Python (the host language) that translate into C/C++ syntax (the target language):
# from metaL import *
class Object:
    def __init__(self, V):
        self.val = V
        self.slot = {}
        self.nest = []

    # head() and // (__floordiv__) are not shown in the original snippet;
    # these are minimal guesses so the example runs end to end.
    def head(self, test=False):
        return '<%s:%s>' % (self.__class__.__name__.lower(), self.val)

    def __floordiv__(self, other):      # hello // stdlib appends to the graph
        self.nest.append(other)
        return self

class Module(Object):
    def cc(self):
        c = '// \\ %s\n' % self.head(test=True)
        for i in self.nest:
            c += i.cc()
        c += '// / %s\n' % self.head(test=True)
        return c

hello = Module('hello')
# <module:hello> #a04475a2

class Include(Object):
    def cc(self):
        return '#include <%s.h>\n' % self.val

stdlib = Include('stdlib')
hello // stdlib
# <module:hello> #b6efb657
# 0: <include:stdlib> #f1af3e21

class Fn(Object):
    def cc(self):
        return '\nvoid %s() {\n}\n\n' % self.val

main = Fn('main')
hello // main
print(hello.cc())
// \ <module:hello>
#include <stdlib.h>
void main() {
}
// / <module:hello>
But you are not limited to the level of abstraction of the constructed object graph: not only can you freely add your own types, but the object graph can interpret itself and thus modify itself, the same way lists in Lisp can.

What is declarative programming? [closed]

I keep hearing this term tossed around in several different contexts. What is it?
Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how.
Examples of declarative programming languages are SQL and Prolog.
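To see the contrast inside a single language, here is a small Python sketch (the data is invented): the imperative version spells out how to loop, test and accumulate, while the declarative-ish version only states what is wanted and leaves the how to the runtime.

people = [("Ada", 36), ("Linus", 54), ("Grace", 85)]

# Imperative: say *how* -- loop, test, append, in a fixed order.
adults_over_50 = []
for name, age in people:
    if age > 50:
        adults_over_50.append(name)

# Declarative(ish): say *what* -- "the names of people older than 50".
adults_over_50 = [name for name, age in people if age > 50]

print(adults_over_50)   # ['Linus', 'Grace']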
The other answers already do a fantastic job explaining what declarative programming is, so I'm just going to provide some examples of why that might be useful.
Context Independence
Declarative Programs are context-independent. Because they only declare what the ultimate goal is, but not the intermediary steps to reach that goal, the same program can be used in different contexts. This is hard to do with imperative programs, because they often depend on the context (e.g. hidden state).
Take yacc as an example. It's a parser generator aka. compiler compiler, an external declarative DSL for describing the grammar of a language, so that a parser for that language can automatically be generated from the description. Because of its context independence, you can do many different things with such a grammar:
Generate a C parser for that grammar (the original use case for yacc)
Generate a C++ parser for that grammar
Generate a Java parser for that grammar (using Jay)
Generate a C# parser for that grammar (using GPPG)
Generate a Ruby parser for that grammar (using Racc)
Generate a tree visualization for that grammar (using GraphViz)
Simply do some pretty-printing, fancy formatting and syntax highlighting of the yacc source file itself and include it in your Reference Manual as a syntactic specification of your language
And many more …
Optimization
Because you don't prescribe the computer which steps to take and in what order, it can rearrange your program much more freely, maybe even execute some tasks in parallel. A good example is a query planner and query optimizer for a SQL database. Most SQL databases allow you to display the query that they are actually executing vs. the query that you asked them to execute. Often, those queries look nothing like each other. The query planner takes things into account that you wouldn't even have dreamed of: rotational latency of the disk platter, for example or the fact that some completely different application for a completely different user just executed a similar query and the table that you are joining with and that you worked so hard to avoid loading is already in memory anyway.
There is an interesting trade-off here: the machine has to work harder to figure out how to do something than it would in an imperative language, but when it does figure it out, it has much more freedom and much more information for the optimization stage.
Loosely:
Declarative programming tends towards:-
Sets of declarations, or declarative statements, each of which has meaning (often in the problem domain) and may be understood independently and in isolation.
Imperative programming tends towards:-
Sequences of commands, each of which perform some action; but which may or may not have meaning in the problem domain.
As a result, an imperative style helps the reader to understand the mechanics of what the system is actually doing, but may give little insight into the problem that it is intended to solve. On the other hand, a declarative style helps the reader to understand the problem domain and the approach that the system takes towards the solution of the problem, but is less informative on the matter of mechanics.
Real programs (even ones written in languages that favor the ends of the spectrum, such as Prolog or C) tend to have both styles present to various degrees at various points, to satisfy the varying complexities and communication needs of the piece. One style is not superior to the other; they just serve different purposes, and, as with many things in life, moderation is key.
Here's an example.
In CSS (used to style HTML pages), if you want an image element to be 100 pixels high and 100 pixels wide, you simply "declare" that that's what you want as follows:
#myImageId {
    height: 100px;
    width: 100px;
}
You can consider CSS a declarative "style sheet" language.
The browser engine that reads and interprets this CSS is free to make the image appear this tall and this wide however it wants. Different browser engines (e.g., the engine for IE, the engine for Chrome) will implement this task differently.
Their unique implementations are, of course, NOT written in a declarative language but in a procedural one like Assembly, C, C++, Java, JavaScript, or Python. That code is a bunch of steps to be carried out step by step (and might include function calls). It might do things like interpolate pixel values, and render on the screen.
I am sorry, but I must disagree with many of the other answers. I would like to stop this muddled misunderstanding of the definition of declarative programming.
Definition
Referential transparency (RT) of the sub-expressions is the only required attribute of a declarative programming expression, because it is the only attribute which is not shared with imperative programming.
Other cited attributes of declarative programming, derive from this RT. Please click the hyperlink above for the detailed explanation.
Spreadsheet example
Two answers mentioned spreadsheet programming. In the cases where the spreadsheet programming (a.k.a. formulas) does not access mutable global state, then it is declarative programming. This is because the mutable cell values are the monolithic input and output of the main() (the entire program). The new values are not written to the cells after each formula is executed, thus they are not mutable for the life of the declarative program (execution of all the formulas in the spreadsheet). Thus relative to each other, the formulas view these mutable cells as immutable. An RT function is allowed to access immutable global state (and also mutable local state).
Thus the ability to mutate the values in the cells when the program terminates (as an output from main()), does not make them mutable stored values in the context of the rules. The key distinction is the cell values are not updated after each spreadsheet formula is performed, thus the order of performing the formulas does not matter. The cell values are updated after all the declarative formulas have been performed.
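A minimal Python sketch of that recalculation model (cell names and formulas invented): each formula is a pure function of a frozen snapshot of the sheet, and new values are only written back after all formulas have been evaluated, so the order in which the formulas run does not matter.

# Formulas are pure functions of a read-only snapshot of the sheet.
formulas = {
    "A1": lambda cells: 10,
    "A2": lambda cells: cells["A1"] * 2,
    "A3": lambda cells: cells["A1"] + cells["A2"],
}

def recalculate(cells):
    snapshot = dict(cells)                        # formulas only ever see this frozen copy
    return {name: f(snapshot) for name, f in formulas.items()}

sheet = {"A1": 0, "A2": 0, "A3": 0}
sheet = recalculate(sheet)     # first pass: A2 and A3 still see the old A1
sheet = recalculate(sheet)     # repeat until the values settle
sheet = recalculate(sheet)
print(sheet)                   # {'A1': 10, 'A2': 20, 'A3': 30}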
Declarative programming is the picture, where imperative programming is instructions for painting that picture.
You're writing in a declarative style if you're "Telling it what it is", rather than describing the steps the computer should take to get to where you want it.
When you use XML to mark-up data, you're using declarative programming because you're saying "This is a person, that is a birthday, and over there is a street address".
Some examples of where declarative and imperative programming get combined for greater effect:
Windows Presentation Foundation uses declarative XML syntax to describe what a user interface looks like, and what the relationships (bindings) are between controls and underlying data structures.
Structured configuration files use declarative syntax (as simple as "key=value" pairs) to identify what a string or value of data means.
HTML marks up text with tags that describe what role each piece of text has in relation to the whole document.
Declarative Programming is programming with declarations, i.e. declarative sentences. Declarative sentences have a number of properties that distinguish them from imperative sentences. In particular, declarations are:
commutative (can be reordered)
associative (can be regrouped)
idempotent (can repeat without change in meaning)
monotonic (declarations don't subtract information)
A relevant point is that these are all structural properties and are orthogonal to subject matter. Declarative is not about "What vs. How". We can declare (represent and constrain) a "how" just as easily as we declare a "what". Declarative is about structure, not content. Declarative programming has a significant impact on how we abstract and refactor our code, and how we modularize it into subprograms, but not so much on the domain model.
Often, we can convert from imperative to declarative by adding context. E.g. from "Turn left. (... wait for it ...) Turn Right." to "Bob will turn left at intersection of Foo and Bar at 11:01. Bob will turn right at the intersection of Bar and Baz at 11:06." Note that in the latter case the sentences are idempotent and commutative, whereas in the former case rearranging or repeating the sentences would severely change the meaning of the program.
Regarding monotonic, declarations can add constraints which subtract possibilities. But constraints still add information (more precisely, constraints are information). If we need time-varying declarations, it is typical to model this with explicit temporal semantics - e.g. from "the ball is flat" to "the ball is flat at time T". If we have two contradictory declarations, we have an inconsistent declarative system, though this might be resolved by introducing soft constraints (priorities, probabilities, etc.) or leveraging a paraconsistent logic.
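A small Python sketch of the "adding context" idea (using the names and times made up above): once each statement carries its own context, the program is just a set of facts, and reordering or repeating them cannot change its meaning.

# Imperative: meaning depends on the order of the commands.
route_imperative = ["turn left", "turn right"]          # swapping these changes the route

# Declarative: each fact carries its own context, so order and duplicates don't matter.
route_declarative = {
    ("Bob", "turn left",  "Foo & Bar", "11:01"),
    ("Bob", "turn right", "Bar & Baz", "11:06"),
}
same_route = {
    ("Bob", "turn right", "Bar & Baz", "11:06"),
    ("Bob", "turn left",  "Foo & Bar", "11:01"),
    ("Bob", "turn left",  "Foo & Bar", "11:01"),        # a repeated declaration is harmless
}
print(route_declarative == same_route)   # True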
Describing to a computer what you want, not how to do something.
Imagine an Excel sheet, with columns populated with formulas to calculate your tax return.
All the logic is declared in the cells; the order of the calculation is determined by the formulas themselves rather than procedurally.
That is sort of what declarative programming is all about. You declare the problem space and the solution rather than the flow of the program.
Prolog is the only declarative language I've used. It requires a different kind of thinking, but it's good to learn if only to expose you to something other than the typical procedural programming language.
I have refined my understanding of declarative programming, since Dec 2011 when I provided an answer to this question. Here follows my current understanding.
The long version of my understanding (research) is detailed at this link, which you should read to gain a deep understanding of the summary I will provide below.
Imperative programming is where mutable state is stored and read, thus the ordering and/or duplication of program instructions can alter the behavior (semantics) of the program (and even cause a bug, i.e. unintended behavior).
In the most naive and extreme sense (which I asserted in my prior answer), declarative programming (DP) is avoiding all stored mutable state, thus the ordering and/or duplication of program instructions can NOT alter the behavior (semantics) of the program.
However, such an extreme definition would not be very useful in the real world, since nearly every program involves stored mutable state. The spreadsheet example conforms to this extreme definition of DP, because the entire program code is run to completion with one static copy of the input state, before the new states are stored. Then if any state is changed, this is repeated. But most real world programs can't be limited to such a monolithic model of state changes.
A more useful definition of DP is that the ordering and/or duplication of programming instructions do not alter any opaque semantics. In other words, there are no hidden random changes in semantics occurring; any changes in program instruction order and/or duplication cause only intended and transparent changes to the program's behavior.
The next step would be to talk about which programming models or paradigms aid in DP, but that is not the question here.
It's a method of programming based around describing what something should do or be instead of describing how it should work.
In other words, you don't write algorithms made of expressions, you just layout how you want things to be. Two good examples are HTML and WPF.
This Wikipedia article is a good overview: http://en.wikipedia.org/wiki/Declarative_programming
Since I wrote my prior answer, I have formulated a new definition of the declarative property which is quoted below. I have also defined imperative programming as the dual property.
This definition is superior to the one I provided in my prior answer, because it is succinct and it is more general. But it may be more difficult to grok, because the implications of the incompleteness theorems applicable to programming and life in general are difficult for humans to wrap their minds around.
The quoted explanation of the definition discusses the role pure functional programming plays in declarative programming.
Declarative vs. Imperative
The declarative property is weird, obtuse, and difficult to capture in a technically precise definition that remains general and not ambiguous, because it is a naive notion that we can declare the meaning (a.k.a semantics) of the program without incurring unintended side effects. There is an inherent tension between expression of meaning and avoidance of unintended effects, and this tension actually derives from the incompleteness theorems of programming and our universe.
It is oversimplification, technically imprecise, and often ambiguous to define declarative as “what to do” and imperative as “how to do”. An ambiguous case is the “what” is the “how” in a program that outputs a program— a compiler.
Evidently the unbounded recursion that makes a language Turing complete, is also analogously in the semantics— not only in the syntactical structure of evaluation (a.k.a. operational semantics). This is logically an example analogous to Gödel's theorem— “any complete system of axioms is also inconsistent”. Ponder the contradictory weirdness of that quote! It is also an example that demonstrates how the expression of semantics does not have a provable bound, thus we can't prove2 that a program (and analogously its semantics) halt a.k.a. the Halting theorem.
The incompleteness theorems derive from the fundamental nature of our universe, which as stated in the Second Law of Thermodynamics is “the entropy (a.k.a. the # of independent possibilities) is trending to maximum forever”. The coding and design of a program is never finished— it's alive!— because it attempts to address a real world need, and the semantics of the real world are always changing and trending to more possibilities. Humans never stop discovering new things (including errors in programs ;-).
To precisely and technically capture this aforementioned desired notion within this weird universe that has no edge (ponder that! there is no “outside” of our universe), requires a terse but deceptively-not-simple definition which will sound incorrect until it is explained deeply.
Definition:
The declarative property is where there can exist only one possible set of statements that can express each specific modular semantic.
The imperative property3 is the dual, where semantics are inconsistent under composition and/or can be expressed with variations of sets of statements.
This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless where and how it's instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others— and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML.
Examples of these modular declarative semantics include category theory functors e.g. the Applicative, nominal typing, namespaces, named fields, and w.r.t. to operational level of semantics then pure functional programming.
Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency.
An example of the aforementioned definition is the set of formulas in the cells of a spreadsheet program— which are not expected to give the same meaning when moved to different column and row cells, i.e. cell identifiers changed. The cell identifiers are part of and not superfluous to the intended meaning. So each spreadsheet result is unique w.r.t. to the cell identifiers in a set of formulas. The consistent modular semantic in this case is use of cell identifiers as the input and output of pure functions for cells formulas (see below).
Hyper Text Markup Language a.k.a. HTML— the language for static web pages— is an example of a highly (but not perfectly3) declarative language that (at least before HTML 5) had no capability to express dynamic behavior. HTML is perhaps the easiest language to learn. For dynamic behavior, an imperative scripting language such as JavaScript was usually combined with HTML. HTML without JavaScript fits the declarative definition because each nominal type (i.e. the tags) maintains its consistent meaning under composition within the rules of the syntax.
A competing definition for declarative is the commutative and idempotent properties of the semantic statements, i.e. that statements can be reordered and duplicated without changing the meaning. For example, statements assigning values to named fields can be reordered and duplicated without changing the meaning of the program, if those names are modular w.r.t. to any implied order. Names sometimes imply an order, e.g. cell identifiers include their column and row position— moving a total on a spreadsheet changes its meaning. Otherwise, these properties implicitly require global consistency of semantics. It is generally impossible to design the semantics of statements so they remain consistent if randomly ordered or duplicated, because order and duplication are intrinsic to semantics. For example, the statements “Foo exists” (or construction) and “Foo does not exist” (and destruction). If one considers random inconsistency endemic to the intended semantics, then one accepts this definition as general enough for the declarative property. In essence this definition is vacuous as a generalized definition because it attempts to make consistency orthogonal to semantics, i.e. to defy the fact that the universe of semantics is dynamically unbounded and can't be captured in a global coherence paradigm.
Requiring the commutative and idempotent properties for the (structural evaluation order of the) lower-level operational semantics converts operational semantics to a declarative localized modular semantic, e.g. pure functional programming (including recursion instead of imperative loops). Then the operational order of the implementation details do not impact (i.e. spread globally into) the consistency of the higher-level semantics. For example, the order of evaluation of (and theoretically also the duplication of) the spreadsheet formulas doesn't matter because the outputs are not copied to the inputs until after all outputs have been computed, i.e. analogous to pure functions.
C, Java, C++, C#, PHP, and JavaScript aren't particularly declarative.
Copute's syntax and Python's syntax are more declaratively coupled to intended results, i.e. consistent syntactical semantics that eliminate the extraneous so one can readily comprehend code after they've forgotten it. Copute and Haskell enforce determinism of the operational semantics and encourage “don't repeat yourself” (DRY), because they only allow the pure functional paradigm.
2 Even where we can prove the semantics of a program, e.g. with the language Coq, this is limited to the semantics that are expressed in the typing, and typing can never capture all of the semantics of a program— not even for languages that are not Turing complete, e.g. with HTML+CSS it is possible to express inconsistent combinations which thus have undefined semantics.
3 Many explanations incorrectly claim that only imperative programming has syntactically ordered statements. I clarified this confusion between imperative and functional programming. For example, the order of HTML statements does not reduce the consistency of their meaning.
Edit: I posted the following comment to Robert Harper's blog:
in functional programming ... the range of variation of a variable is a type
Depending on how one distinguishes functional from imperative
programming, your ‘assignable’ in an imperative program also may have
a type placing a bound on its variability.
The only non-muddled definition I currently appreciate for functional
programming is a) functions as first-class objects and types, b)
preference for recursion over loops, and/or c) pure functions— i.e.
those functions which do not impact the desired semantics of the
program when memoized (thus perfectly pure functional
programming doesn't exist in a general purpose denotational semantics
due to impacts of operational semantics, e.g. memory
allocation).
The idempotent property of a pure function means the function call on
its variables can be substituted by its value, which is not generally
the case for the arguments of an imperative procedure. Pure functions
seem to be declarative w.r.t. to the uncomposed state transitions
between the input and result types.
But the composition of pure functions does not maintain any such
consistency, because it is possible to model a side-effect (global
state) imperative process in a pure functional programming language,
e.g. Haskell's IOMonad and moreover it is entirely impossible to
prevent doing such in any Turing complete pure functional programming
language.
As I wrote in 2012, and as seems to be the consensus of
comments on your recent blog, declarative programming is an
attempt to capture the notion that the intended semantics are never
opaque. Examples of opaque semantics are dependence on order,
dependence on erasure of higher-level semantics at the operational
semantics layer (e.g. casts are not conversions and reified generics
limit higher-level semantics), and dependence on variable values
which can not be checked (proved correct) by the programming language.
Thus I have concluded that only non-Turing complete languages can be
declarative.
Thus one unambiguous and distinct attribute of a declarative language
could be that its output can be proven to obey some enumerable set of
generative rules. For example, for any specific HTML program (ignoring
differences in the ways interpreters diverge) that is not scripted
(i.e. is not Turing complete) then its output variability can be
enumerable. Or more succinctly an HTML program is a pure function of
its variability. Ditto a spreadsheet program is a pure function of its
input variables.
So it seems to me that declarative languages are the antithesis of
unbounded recursion, i.e. per Gödel's second incompleteness
theorem self-referential theorems can't be proven.
Leslie Lamport wrote a fairytale about how Euclid might have
worked around Gödel's incompleteness theorems applied to math proofs
in the programming language context by appealing to the congruence between
types and logic (Curry-Howard correspondence, etc).
Declarative programming is "the act of programming in languages that conform to the mental model of the developer rather than the operational model of the machine".
The difference between declarative and imperative programming is well
illustrated by the problem of parsing structured data.
An imperative program would use mutually recursive functions to consume input
and generate data. A declarative program would express a grammar that defines
the structure of the data so that it can then be parsed.
The difference between these two approaches is that the declarative program
creates a new language that is more closely mapped to the mental model of the
problem than is its host language.
It may sound odd, but I'd add Excel (or any spreadsheet really) to the list of declarative systems. A good example of this is given here.
I'd explain it as: DP is a way to express
A goal expression, the conditions for what we are searching for. Is there one, maybe, or many?
Some known facts
Rules that extend the known facts
...and where there is a deduction engine usually working with a unification algorithm to find the goals.
As far as I can tell, it started being used to describe programming systems like Prolog, because Prolog is (supposedly) about declaring things in an abstract way.
It increasingly means very little, as it has the definition given by the users above. It should be clear that there is a gulf between the declarative programming of Haskell, as against the declarative programming of HTML.
A couple other examples of declarative programming:
ASP.Net markup for databinding. It just says "fill this grid with this source", for example, and leaves it to the system for how that happens.
Linq expressions
Declarative programming is nice because it can help simplify your mental model of code, and because it might eventually be more scalable.
For example, let's say you have a function that does something to each element in an array or list. Traditional code would look like this:
foreach (object item in MyList)
{
    DoSomething(item);
}
No big deal there. But what if you use the more-declarative syntax and instead define DoSomething() as an Action? Then you can say it this way:
MyList.ForEach(DoSomething);
This is, of course, more concise. But I'm sure you have more concerns than just saving two lines of code here and there. Performance, for example. The old way, processing had to be done in sequence. What if the .ForEach() method had a way for you to signal that it could handle the processing in parallel, automatically? Now all of a sudden you've made your code multi-threaded in a very safe way and only changed one line of code. And, in fact, there's an extension for .NET that lets you do just that.
If you follow that link, it takes you to a blog post by a friend of mine. The whole post is a little long, but you can scroll down to the heading titled "The Problem" and pick it up there, no problem.
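The same point can be sketched in Python rather than .NET (the worker function is invented): once the per-item operation is declared separately from the loop, switching from sequential to parallel execution is a one-line change, because the "how" of iteration belongs to the library.

from multiprocessing import Pool

def do_something(item):
    return item * item              # stand-in for real per-item work

if __name__ == "__main__":
    items = list(range(10))

    # Sequential: the library decides how to iterate.
    results = list(map(do_something, items))

    # Parallel: same declaration of the per-item work, different execution strategy.
    with Pool() as pool:
        results = pool.map(do_something, items)

    print(results)                  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]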
It depends on how you submit the answer to the text. Overall you can look at the programme from a certain view, but it depends what angle you look at the problem from. I will get you started with the programme:
Dim Bus, Car, Time, Height As Integer
Again it depends on what the problem is overall. You might have to shorten it due to the programme. Hope this helps, and I need the feedback if it does not.
Thank You.

Resources