In his article "Why Functional Programming Matters," John Hughes argues that "Lazy evaluation is perhaps the most powerful tool for modularization in the functional programmer's repertoire." To illustrate this, he gives an example along these lines:
Suppose you have two functions, "infiniteLoop" and "terminationCondition." You can do the following:
terminationCondition(infiniteLoop input)
Lazy evaluation, in Hughes' words, "allows termination conditions to be separated from loop bodies." This is certainly true: with lazy evaluation, "terminationCondition" can be defined outside the loop -- infiniteLoop stops executing as soon as terminationCondition stops demanding data.
But couldn't higher-order functions achieve the same thing as follows?
infiniteLoop(input, terminationCondition)
How does lazy evaluation provide modularization here that's not provided by higher-order functions?
Yes, you could use a passed-in termination check, but for that to work the author of infiniteLoop would have had to foresee the possibility of wanting to terminate the loop with that sort of condition, and hardwire a call to the termination condition into their function.
And even if the specific condition can be passed in as a function, the "shape" of it is predetermined by the author of infiniteLoop. What if they give me a termination condition "slot" that is called on each element, but I need access to the last several elements to check some sort of convergence condition? Maybe for a simple sequence generator you could come up with "the most general possible" termination condition type, but it's not obvious how to do so and remain efficient and easy to use. Do I repeatedly pass the entire sequence so far into the termination condition, in case that's what it's checking? Do I force my callers to wrap their simple termination conditions up in a more complicated package so they fit the most general condition type?
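To make the "shape" problem concrete, here is a minimal Haskell sketch of my own, loosely following the square-root example in Hughes' paper: a lazy producer of approximations, a per-element cutoff that is easy to pass in as a higher-order argument, and a convergence test whose shape (it needs adjacent pairs) a strict producer would have had to anticipate.

-- infinite stream of Newton iterations approximating sqrt n
sqrtApprox :: Double -> [Double]
sqrtApprox n = iterate (\x -> (x + n / x) / 2) 1.0

-- a per-element cutoff is the kind of condition a producer could easily accept...
takeUntil :: (a -> Bool) -> [a] -> [a]
takeUntil p = foldr (\x acc -> if p x then [x] else x : acc) []

-- ...but a convergence test inspects *adjacent pairs* of elements, a shape the
-- producer never had to anticipate; with laziness the consumer just writes it:
withinEps :: Double -> [Double] -> Double
withinEps eps (a:b:rest)
  | abs (a - b) < eps = b
  | otherwise         = withinEps eps (b:rest)
withinEps _ xs = last xs

-- e.g. withinEps 1e-9 (sqrtApprox 2) converges to ~1.41421356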
The callers certainly have to know exactly how the termination condition is called in order to supply a correct condition. That could be quite a bit of dependence on this specific implementation. If they switch to a different implementation of infiniteLoop written by another third party, how likely is it that exactly the same design for the termination condition would be used? With a lazy infiniteLoop, I can drop in any implementation that is supposed to produce the same sequence.
And what if infiniteLoop isn't a simple sequence generator, but actually generates a more complex infinite data structure, like a tree? If all the branches of the tree are independently recursively generated (think of a move tree for a game like chess) it could make sense to cut different branches at different depths, based on all sorts of conditions on the information generated thus far.
If the original author didn't prepare (either specifically for my use case or for a sufficiently general class of use cases), I'm out of luck. The author of the lazy infiniteLoop can just write it the natural way, and let each individual caller lazily explore what they want; neither has to know much about the other at all.
Furthermore, what if the decision to stop lazily exploring the infinite output is actually interleaved with (and dependent on) the computation the caller is doing with that output? Think of the chess move tree again; how far I want to explore one branch of the tree could easily depend on my evaluation of the best option I've found in other branches of the tree. So either I do my traversal and calculation twice (once in the termination condition to return a flag telling infiniteLoop to stop, and then once again with the finite output so I can actually have my result), or the author of infiniteLoop would have had to prepare for not just a termination condition, but a complicated function that also gets to return output (so that I can push my entire computation inside the "termination condition").
Taken to extremes, I could explore the output and calculate some results, display them to a user and get input, and then continue exploring the data structure (without recalling infiniteLoop based on the user's input). The original author of the lazy infiniteLoop need have no idea that I would ever think of doing such a thing, and it will still work. If we've got purity enforced by the type system, then that would be impossible with the passed-in termination condition approach unless the whole infiniteLoop was allowed to have side effects if the termination condition needs to (say by giving the whole thing a monadic interface).
In short, allowing the same flexibility you'd get with lazy evaluation by using a strict infiniteLoop that takes higher-order functions to control it can mean a large amount of extra complexity for both the author of infiniteLoop and its caller (unless a variety of simpler wrappers are exposed, and one of them matches the caller's use case). Lazy evaluation can allow producers and consumers to be almost completely decoupled, while still giving the consumer the ability to control how much output the producer generates. Everything you can do that way you can indeed do with extra function arguments, as you say, but it requires the producer and consumer to essentially agree on a protocol for how the control functions work; and that protocol is almost always either specialised to the use case at hand (tying the consumer and producer together) or so complicated in order to be fully general that the producer and consumer are tied to that protocol, which is unlikely to be recreated elsewhere, and so they're still tied together.
I know that memoization seems to be a perennial topic here on the haskell tag on stack overflow, but I think this question has not been asked before.
I'm aware of several different 'off the shelf' memoization libraries for Haskell:
The memo-combinators and memotrie packages, which make use of a beautiful trick involving lazy infinite data structures to achieve memoization in a purely functional way. (As I understand it, the former is slightly more flexible, while the latter is easier to use in simple cases: see this SO answer for discussion.)
The uglymemo package, which uses unsafePerformIO internally but still presents a referentially transparent interface. The use of unsafePerformIO internally results in better performance than the previous two packages. (Off the shelf, its implementation uses comparison-based search data structures, rather than perhaps-slightly-more-efficient hash functions; but I think that if you find and replace Cmp for Hashable and Data.Map for Data.HashMap and add the appropriate imports, you get a hash-based version.)
However, I'm not aware of any library that looks answers up based on object identity rather than object value. This can be important, because sometimes the kinds of object which are being used as keys to your memo table (that is, as input to the function being memoized) can be large---so large that fully examining the object to determine whether you've seen it before is itself a slow operation. Slow, and also unnecessary, if you will be applying the memoized function again and again to an object which is stored at a given 'location in memory' 1. (This might happen, for example, if we're memoizing a function which is being called recursively over some large data structure with a lot of structural sharing.) If we've already computed our memoized function on that exact object before, we can already know the answer, even without looking at the object itself!
Implementing such a memoization library involves several subtle issues and doing it properly requires several special pieces of support from the language. Luckily, GHC provides all the special features that we need, and there is a paper by Peyton-Jones, Marlow and Elliott which basically worries about most of these issues for you, explaining how to build a solid implementation. They don't provide all details, but they get close.
The one detail which I can see which one probably ought to worry about, but which they don't worry about, is thread safety---their code is apparently not threadsafe at all.
My question is: does anyone know of a packaged library which does the kind of memoization discussed in the Peyton-Jones, Marlow and Elliott paper, filling in all the details (and preferably filling in proper thread-safety as well)?
Failing that, I guess I will have to code it up myself: does anyone have any ideas of other subtleties (beyond thread safety and the ones discussed in the paper) which the implementer of such a library would do well to bear in mind?
UPDATE
Following #luqui's suggestion below, here's a little more data on the exact problem I face. Let's suppose there's a type:
data Node = Node [Node] [Annotation]
This type can be used to represent a simple kind of rooted DAG in memory, where Nodes are DAG Nodes, the root is just a distinguished Node, and each node is annotated with some Annotations whose internal structure, I think, need not concern us (but if it matters, just ask and I'll be more specific.) If used in this way, note that there may well be significant structural sharing between Nodes in memory---there may be exponentially more paths which lead from the root to a node than there are nodes themselves. I am given a data structure of this form, from an external library with which I must interface; I cannot change the data type.
I have a function
myTransform :: Node -> Node
the details of which need not concern us (or at least I think so; but again I can be more specific if needed). It maps nodes to nodes, examining the annotations of the node it is given, and the annotations its immediate children, to come up with a new Node with the same children but possibly different annotations. I wish to write a function
recursiveTransform :: Node -> Node
whose output 'looks the same' as the data structure you would get by doing:
recursiveTransform (Node originalChildren annotations) =
    myTransform (Node recursivelyTransformedChildren annotations)
  where
    recursivelyTransformedChildren = map recursiveTransform originalChildren
except that it uses structural sharing in the obvious way so that it doesn't return an exponential data structure, but rather one on the order of the same size as its input.
I appreciate that this would all be easier if say, the Nodes were numbered before I got them, or I could otherwise change the definition of a Node. I can't (easily) do either of these things.
I am also interested in the general question of the existence of a library implementing the functionality I mention quite independently of the particular concrete problem I face right now: I feel like I've had to work around this kind of issue on a few occasions, and it would be nice to slay the dragon once and for all. The fact that SPJ et al felt that it was worth adding not one but three features to GHC to support the existence of libraries of this form suggests that the feature is genuinely useful and can't be worked around in all cases. (BUT I'd still also be very interested in hearing about workarounds which will help in this particular case too: the long term problem is not as urgent as the problem I face right now :-) )
1 Technically, I don't quite mean location in memory, since the garbage collector sometimes moves objects around a bit---what I really mean is 'object identity'. But we can think of this as being roughly the same as our intuitive idea of location in memory.
If you only want to memoize based on object identity, and not equality, you can just use the existing laziness mechanisms built into the language.
For example, if you have a data structure like this
data Foo = Foo { ... }
expensive :: Foo -> Bar
then you can just add the value to be memoized as an extra field and let the laziness take care of the rest for you.
data Foo = Foo { ..., memo :: Bar }
To make it easier to use, add a smart constructor to tie the knot.
makeFoo ... = let foo = Foo { ..., memo = expensive foo } in foo
Though this is somewhat less elegant than using a library, and requires modification of the data type to really be useful, it's a very simple technique and all thread-safety issues are already taken care of for you.
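For concreteness, here is a minimal sketch of the same pattern with a hypothetical Person type standing in for Foo and a cheap stand-in for expensive (the names are mine, not from the answer above):

data Person = Person { name :: String, age :: Int, summary :: String }

-- pretend this traversal is costly; it only reads the other fields
expensiveSummary :: Person -> String
expensiveSummary p = name p ++ " (" ++ show (age p) ++ ")"

-- the smart constructor ties the knot: 'summary' is a lazy thunk over the very
-- record being built, computed at most once, the first time it is demanded
makePerson :: String -> Int -> Person
makePerson n a = let p = Person { name = n, age = a, summary = expensiveSummary p }
                 in p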
It seems that stable-memo would be just what you needed (although I'm not sure if it can handle multiple threads):
Whereas most memo combinators memoize based on equality, stable-memo does it based on whether the exact same argument has been passed to the function before (that is, is the same argument in memory).
stable-memo only evaluates keys to WHNF.
This can be more suitable for recursive functions over graphs with cycles.
stable-memo doesn't retain the keys it has seen so far, which allows them to be garbage collected if they will no longer be used. Finalizers are put in place to remove the corresponding entries from the memo table if this happens.
Data.StableMemo.Weak provides an alternative set of combinators that also avoid retaining the results of the function, only reusing results if they have not yet been garbage collected.
There is no type class constraint on the function's argument.
stable-memo will not work for arguments which happen to have the same value but are not the same heap object. This rules out many candidates for memoization, such as the most common example, the naive Fibonacci implementation whose domain is machine Ints; it can still be made to work for some domains, though, such as the lazy naturals.
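For the Node problem in the question, usage might look roughly like this. This is only a sketch: it assumes stable-memo exposes Data.StableMemo.memo :: (a -> b) -> a -> b (check the package documentation for the exact signature), and it reuses the question's Node, Annotation and myTransform.

import Data.StableMemo (memo)

recursiveTransform :: Node -> Node
recursiveTransform = go
  where
    -- memoised on object identity, so a child shared by many parents is
    -- transformed only once and the transformed result is itself shared
    go = memo (\(Node children annotations) ->
                 myTransform (Node (map go children) annotations))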
Ekmett just uploaded a library that handles this and more (produced at HacPhi): http://hackage.haskell.org/package/intern. He assures me that it is thread safe.
Edit: Actually, strictly speaking I realize this does something rather different. But I think you can use it for your purposes. It's really more of a stringtable-atom type interning library that works over arbitrary data structures (including recursive ones). It uses WeakPtrs internally to maintain the table. However, it uses Ints to index the values to avoid structural equality checks, which means packing them into the data type, when what you want are apparently actually StableNames. So I realize this answers a related question, but requires modifying your data type, which you want to avoid...
I have an upcoming project in which a core requirement will be to mutate the way a method works at runtime. Note that I'm not talking about a higher level OO concept like "shadow one method with another", although the practical effect would be similar.
The key properties I'm after are:
I must be able to modify the method in such a way that I can add new expressions, remove existing expressions, or modify any of the expressions that take place in it.
After modifying the method, subsequent calls to that method would invoke the new sequence of operations. (Or, if the language binds methods rather than evaluating every single time, provide me a way to unbind/rebind the new method.)
Ideally, I would like to manipulate the atomic units of the language (e.g., "invoke method foo on object bar") and not the assembly directly (e.g. "pop these three parameters onto the stack"). In other words, I'd like to be able to have high confidence that the operations I construct are semantically meaningful in the language. But I'll take what I can get.
If you're not sure if a candidate language meets these criteria, here's a simple litmus test:
Can you write another method called clean which:
accepts a method m as input
returns another method m2 that performs the same operations as m
such that m2 is identical to m, but doesn't contain any calls to the print-to-standard-out method in your language (puts, System.Console.WriteLn, println, etc.)?
I'd like to do some preliminary research now and figure out what the strongest candidates are. Having a large, active community is as important to me as the practicality of implementing what I want to do. I am aware that there may be some uncharted territory here, since manipulating bytecode directly is not typically an operation that needs to be exposed.
What are the choices available to me? If possible, can you provide a toy example in one or more of the languages that you recommend, or point me to a recent example?
Update: The reason I'm after this is that I'd like to write a program which is capable of modifying itself at runtime in response to new information. This modification goes beyond mere parameters or configurable data, but full-fledged, evolved changes in behavior. (No, I'm not writing a virus. ;) )
Well, you could always use .NET and the Expression libraries to build up expressions. I think that is really your best bet, as you can build up representations of commands in memory, and there is good library support for manipulating, traversing, etc.
Well, those languages with really strong macro support (in particular Lisps) could qualify.
But are you sure you actually need to go this deeply? I don't know what you're trying to do, but I suppose you could emulate it without actually getting too deeply into metaprogramming. Say, instead of using a method and manipulating it, use a collection of functions (with some way of sharing state, e.g. an object holding state passed to each).
I would say Groovy can do this.
For example
class Foo {
    void bar() {
        println "foobar"
    }
}
Foo.metaClass.bar = {->
    println "barfoo"
}
Or on a specific instance of Foo, without affecting other instances:
fooInstance.metaClass.bar = {->
    println "instance barfoo"
}
Using this approach I can modify, remove, or add expressions to the method, and subsequent calls will use the new method. You can do quite a lot with the Groovy metaClass.
In Java, many professional frameworks do this using the open source ASM framework.
Here is a list of well-known Java apps and libraries that include ASM.
A few years ago, BCEL was also widely used.
There are languages/environments that allow real runtime modification -- for example, Common Lisp, Smalltalk, Forth. Use one of them if you really know what you're doing. Otherwise you can simply employ an interpreter pattern for the evolving part of your code; that is possible (and trivial) in any OO or functional language.
I'm confronted with the task of implementing algorithms (mostly business logic style) expressed as flowcharts. I'm aware that flowcharts are not the best algorithm representation due to their spaghetti-code property (would this be a use case for CPS?), but I'm stuck with the specification expressed as flowcharts.
Although I could transform the flowcharts into more appropriate equivalent representations before implementing them, that could make it harder to "recognize" the original flowchart in the resulting implementation, so I was hoping there is some way to directly represent flowchart algorithms as (maybe monadic) EDSLs in Haskell, so that the resemblance to the original flowchart specification would be (more) obvious.
One possible representation of flowcharts is by using a group of mutually tail-recursive functions, by translating "go to step X" into "evaluate function X with state S". For improved readability, you can combine into a single function both the action (an external function that changes the state) and the chain of if/else or pattern matching that helps determine what step to take next.
This is assuming, of course, that your flowcharts are to be hardcoded (as opposed to loaded at runtime from an external source).
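A minimal sketch of that encoding, using an invented flowchart of my own (not from the question): every box becomes a function, "go to step X" becomes a tail call, and the chart's shared variables become an explicit state record.

data St = St { counter :: Int, total :: Int } deriving Show

start, checkDone, addOne, finish :: St -> St
start s = checkDone s { counter = 0, total = 0 }           -- "initialise" box
checkDone s
  | counter s >= 5 = finish s                              -- decision diamond
  | otherwise      = addOne s
addOne s = checkDone s { counter = counter s + 1           -- "do work" box, then
                       , total   = total s + counter s }   -- jump back to the check
finish = id

-- start (St 0 0)  ==>  St {counter = 5, total = 10}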
Sounds like Arrows would fit exactly what you describe. Either do a visualization of arrows (should be quite simple) or generate/transform arrow code from flow-graphs if you must.
Assuming there's "global" state within the flowchart, it makes sense to package that up into a state monad. At least then, unlike how you're doing it now, each call doesn't need any parameters, so each step can be read as a) modify state, b) conditional on current state, jump.
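The same invented flowchart as in the earlier sketch, phrased with Control.Monad.State (from the mtl package) so that each step reads as "modify state, then jump":

import Control.Monad.State

-- same St record as in the sketch above
data St = St { counter :: Int, total :: Int } deriving Show

checkDone, addOne :: State St ()
checkDone = do
  s <- get
  if counter s >= 5 then pure () else addOne      -- decision diamond
addOne = do
  modify (\s -> s { counter = counter s + 1
                  , total   = total s + counter s })
  checkDone                                       -- jump back to the check

-- execState checkDone (St 0 0)  ==>  St {counter = 5, total = 10}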
I keep hearing this term tossed around in several different contexts. What is it?
Declarative programming is when you write your code in such a way that it describes what you want to do, and not how you want to do it. It is left up to the compiler to figure out the how.
Examples of declarative programming languages are SQL and Prolog.
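A tiny Haskell sketch of that distinction (my own illustration): the first version states what the result is; the second spells out how to compute it step by step.

sumOfSquares :: [Int] -> Int
sumOfSquares xs = sum (map (^ 2) xs)   -- declarative: "the sum of the squares"

sumOfSquares' :: [Int] -> Int
sumOfSquares' = go 0                   -- imperative flavour: explicit accumulation
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x * x) xs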
The other answers already do a fantastic job explaining what declarative programming is, so I'm just going to provide some examples of why that might be useful.
Context Independence
Declarative Programs are context-independent. Because they only declare what the ultimate goal is, but not the intermediary steps to reach that goal, the same program can be used in different contexts. This is hard to do with imperative programs, because they often depend on the context (e.g. hidden state).
Take yacc as an example. It's a parser generator, a.k.a. compiler-compiler: an external declarative DSL for describing the grammar of a language, so that a parser for that language can automatically be generated from the description. Because of its context independence, you can do many different things with such a grammar:
Generate a C parser for that grammar (the original use case for yacc)
Generate a C++ parser for that grammar
Generate a Java parser for that grammar (using Jay)
Generate a C# parser for that grammar (using GPPG)
Generate a Ruby parser for that grammar (using Racc)
Generate a tree visualization for that grammar (using GraphViz)
Simply do some pretty-printing, fancy-formatting and syntax highlighting of the yacc source file itself and include it in your Reference Manual as a syntactic specification of your language
And many more …
Optimization
Because you don't prescribe the computer which steps to take and in what order, it can rearrange your program much more freely, maybe even execute some tasks in parallel. A good example is a query planner and query optimizer for a SQL database. Most SQL databases allow you to display the query that they are actually executing vs. the query that you asked them to execute. Often, those queries look nothing like each other. The query planner takes things into account that you wouldn't even have dreamed of: rotational latency of the disk platter, for example or the fact that some completely different application for a completely different user just executed a similar query and the table that you are joining with and that you worked so hard to avoid loading is already in memory anyway.
There is an interesting trade-off here: the machine has to work harder to figure out how to do something than it would in an imperative language, but when it does figure it out, it has much more freedom and much more information for the optimization stage.
Loosely:
Declarative programming tends towards:-
Sets of declarations, or declarative statements, each of which has meaning (often in the problem domain) and may be understood independently and in isolation.
Imperative programming tends towards:-
Sequences of commands, each of which perform some action; but which may or may not have meaning in the problem domain.
As a result, an imperative style helps the reader to understand the mechanics of what the system is actually doing, but may give little insight into the problem that it is intended to solve. On the other hand, a declarative style helps the reader to understand the problem domain and the approach that the system takes towards the solution of the problem, but is less informative on the matter of mechanics.
Real programs (even ones written in languages that favor the ends of the spectrum, such as Prolog or C) tend to have both styles present to various degrees at various points, to satisfy the varying complexities and communication needs of the piece. One style is not superior to the other; they just serve different purposes, and, as with many things in life, moderation is key.
Here's an example.
In CSS (used to style HTML pages), if you want an image element to be 100 pixels high and 100 pixels wide, you simply "declare" that that's what you want as follows:
#myImageId {
    height: 100px;
    width: 100px;
}
You can consider CSS a declarative "style sheet" language.
The browser engine that reads and interprets this CSS is free to make the image appear this tall and this wide however it wants. Different browser engines (e.g., the engine for IE, the engine for Chrome) will implement this task differently.
Their unique implementations are, of course, NOT written in a declarative language but in a procedural one like Assembly, C, C++, Java, JavaScript, or Python. That code is a bunch of steps to be carried out step by step (and might include function calls). It might do things like interpolate pixel values, and render on the screen.
I am sorry, but I must disagree with many of the other answers. I would like to stop this muddled misunderstanding of the definition of declarative programming.
Definition
Referential transparency (RT) of the sub-expressions is the only required attribute of a declarative programming expression, because it is the only attribute which is not shared with imperative programming.
Other cited attributes of declarative programming derive from this RT. Please click the hyperlink above for the detailed explanation.
Spreadsheet example
Two answers mentioned spreadsheet programming. In the cases where the spreadsheet programming (a.k.a. formulas) does not access mutable global state, then it is declarative programming. This is because the mutable cell values are the monolithic input and output of the main() (the entire program). The new values are not written to the cells after each formula is executed, thus they are not mutable for the life of the declarative program (execution of all the formulas in the spreadsheet). Thus relative to each other, the formulas view these mutable cells as immutable. An RT function is allowed to access immutable global state (and also mutable local state).
Thus the ability to mutate the values in the cells when the program terminates (as an output from main()), does not make them mutable stored values in the context of the rules. The key distinction is the cell values are not updated after each spreadsheet formula is performed, thus the order of performing the formulas does not matter. The cell values are updated after all the declarative formulas have been performed.
Declarative programming is the picture, where imperative programming is instructions for painting that picture.
You're writing in a declarative style if you're "Telling it what it is", rather than describing the steps the computer should take to get to where you want it.
When you use XML to mark-up data, you're using declarative programming because you're saying "This is a person, that is a birthday, and over there is a street address".
Some examples of where declarative and imperative programming get combined for greater effect:
Windows Presentation Foundation uses declarative XML syntax to describe what a user interface looks like, and what the relationships (bindings) are between controls and underlying data structures.
Structured configuration files use declarative syntax (as simple as "key=value" pairs) to identify what a string or value of data means.
HTML marks up text with tags that describe what role each piece of text has in relation to the whole document.
Declarative Programming is programming with declarations, i.e. declarative sentences. Declarative sentences have a number of properties that distinguish them from imperative sentences. In particular, declarations are:
commutative (can be reordered)
associative (can be regrouped)
idempotent (can repeat without change in meaning)
monotonic (declarations don't subtract information)
A relevant point is that these are all structural properties and are orthogonal to subject matter. Declarative is not about "What vs. How". We can declare (represent and constrain) a "how" just as easily as we declare a "what". Declarative is about structure, not content. Declarative programming has a significant impact on how we abstract and refactor our code, and how we modularize it into subprograms, but not so much on the domain model.
Often, we can convert from imperative to declarative by adding context. E.g. from "Turn left. (... wait for it ...) Turn Right." to "Bob will turn left at intersection of Foo and Bar at 11:01. Bob will turn right at the intersection of Bar and Baz at 11:06." Note that in the latter case the sentences are idempotent and commutative, whereas in the former case rearranging or repeating the sentences would severely change the meaning of the program.
Regarding monotonic, declarations can add constraints which subtract possibilities. But constraints still add information (more precisely, constraints are information). If we need time-varying declarations, it is typical to model this with explicit temporal semantics - e.g. from "the ball is flat" to "the ball is flat at time T". If we have two contradictory declarations, we have an inconsistent declarative system, though this might be resolved by introducing soft constraints (priorities, probabilities, etc.) or leveraging a paraconsistent logic.
Describing to a computer what you want, not how to do something.
Imagine an Excel sheet, with columns populated with formulas to calculate your tax return.
All the logic is declared in the cells; the order of calculation is determined by the formulas themselves rather than procedurally.
That is sort of what declarative programming is all about. You declare the problem space and the solution rather than the flow of the program.
Prolog is the only declarative language I've used. It requires a different kind of thinking, but it's good to learn if just to expose you to something other than the typical procedural programming language.
I have refined my understanding of declarative programming, since Dec 2011 when I provided an answer to this question. Here follows my current understanding.
The long version of my understanding (research) is detailed at this link, which you should read to gain a deep understanding of the summary I will provide below.
Imperative programming is where mutable state is stored and read, thus the ordering and/or duplication of program instructions can alter the behavior (semantics) of the program (and even cause a bug, i.e. unintended behavior).
In the most naive and extreme sense (which I asserted in my prior answer), declarative programming (DP) is avoiding all stored mutable state, thus the ordering and/or duplication of program instructions can NOT alter the behavior (semantics) of the program.
However, such an extreme definition would not be very useful in the real world, since nearly every program involves stored mutable state. The spreadsheet example conforms to this extreme definition of DP, because the entire program code is run to completion with one static copy of the input state, before the new states are stored. Then if any state is changed, this is repeated. But most real world programs can't be limited to such a monolithic model of state changes.
A more useful definition of DP is that the ordering and/or duplication of programming instructions do not alter any opaque semantics. In other words, there are no hidden random changes in semantics occurring -- any changes in program instruction order and/or duplication cause only intended and transparent changes to the program's behavior.
The next step would be to talk about which programming models or paradigms aid in DP, but that is not the question here.
It's a method of programming based around describing what something should do or be instead of describing how it should work.
In other words, you don't write algorithms made of expressions, you just layout how you want things to be. Two good examples are HTML and WPF.
This Wikipedia article is a good overview: http://en.wikipedia.org/wiki/Declarative_programming
Since I wrote my prior answer, I have formulated a new definition of the declarative property which is quoted below. I have also defined imperative programming as the dual property.
This definition is superior to the one I provided in my prior answer, because it is succinct and it is more general. But it may be more difficult to grok, because the implications of the incompleteness theorems, applicable to programming and life in general, are difficult for humans to wrap their minds around.
The quoted explanation of the definition discusses the role pure functional programming plays in declarative programming.
Declarative vs. Imperative
The declarative property is weird, obtuse, and difficult to capture in a technically precise definition that remains general and not ambiguous, because it is a naive notion that we can declare the meaning (a.k.a semantics) of the program without incurring unintended side effects. There is an inherent tension between expression of meaning and avoidance of unintended effects, and this tension actually derives from the incompleteness theorems of programming and our universe.
It is an oversimplification, technically imprecise, and often ambiguous to define declarative as “what to do” and imperative as “how to do”. An ambiguous case is where the “what” is the “how”: a program that outputs a program— a compiler.
Evidently the unbounded recursion that makes a language Turing complete is also analogously in the semantics— not only in the syntactical structure of evaluation (a.k.a. operational semantics). This is logically an example analogous to Gödel's theorem— “any complete system of axioms is also inconsistent”. Ponder the contradictory weirdness of that quote! It is also an example that demonstrates how the expression of semantics does not have a provable bound, thus we can't prove2 that a program (and analogously its semantics) halts, a.k.a. the Halting theorem.
The incompleteness theorems derive from the fundamental nature of our universe, which as stated in the Second Law of Thermodynamics is “the entropy (a.k.a. the # of independent possibilities) is trending to maximum forever”. The coding and design of a program is never finished— it's alive!— because it attempts to address a real world need, and the semantics of the real world are always changing and trending to more possibilities. Humans never stop discovering new things (including errors in programs ;-).
To precisely and technically capture this aforementioned desired notion within this weird universe that has no edge (ponder that! there is no “outside” of our universe), requires a terse but deceptively-not-simple definition which will sound incorrect until it is explained deeply.
Definition:
The declarative property is where there can exist only one possible set of statements that can express each specific modular semantic.
The imperative property3 is the dual, where semantics are inconsistent under composition and/or can be expressed with variations of sets of statements.
This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless where and how it's instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others— and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML.
Examples of these modular declarative semantics include category theory functors (e.g. the Applicative), nominal typing, namespaces, named fields, and, w.r.t. the operational level of semantics, pure functional programming.
Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency.
An example of the aforementioned definition is the set of formulas in the cells of a spreadsheet program— which are not expected to give the same meaning when moved to different column and row cells, i.e. cell identifiers changed. The cell identifiers are part of and not superfluous to the intended meaning. So each spreadsheet result is unique w.r.t. the cell identifiers in a set of formulas. The consistent modular semantic in this case is use of cell identifiers as the input and output of pure functions for cell formulas (see below).
Hyper Text Markup Language a.k.a. HTML— the language for static web pages— is an example of a highly (but not perfectly3) declarative language that (at least before HTML 5) had no capability to express dynamic behavior. HTML is perhaps the easiest language to learn. For dynamic behavior, an imperative scripting language such as JavaScript was usually combined with HTML. HTML without JavaScript fits the declarative definition because each nominal type (i.e. the tags) maintains its consistent meaning under composition within the rules of the syntax.
A competing definition for declarative is the commutative and idempotent properties of the semantic statements, i.e. that statements can be reordered and duplicated without changing the meaning. For example, statements assigning values to named fields can be reordered and duplicated without changing the meaning of the program, if those names are modular w.r.t. any implied order. Names sometimes imply an order, e.g. cell identifiers include their column and row position— moving a total on a spreadsheet changes its meaning. Otherwise, these properties implicitly require global consistency of semantics. It is generally impossible to design the semantics of statements so they remain consistent if randomly ordered or duplicated, because order and duplication are intrinsic to semantics. For example, the statements “Foo exists” (or construction) and “Foo does not exist” (and destruction). If one considers random inconsistency endemic to the intended semantics, then one accepts this definition as general enough for the declarative property. In essence this definition is vacuous as a generalized definition because it attempts to make consistency orthogonal to semantics, i.e. to defy the fact that the universe of semantics is dynamically unbounded and can't be captured in a global coherence paradigm.
Requiring the commutative and idempotent properties for the (structural evaluation order of the) lower-level operational semantics converts operational semantics to a declarative localized modular semantic, e.g. pure functional programming (including recursion instead of imperative loops). Then the operational order of the implementation details do not impact (i.e. spread globally into) the consistency of the higher-level semantics. For example, the order of evaluation of (and theoretically also the duplication of) the spreadsheet formulas doesn't matter because the outputs are not copied to the inputs until after all outputs have been computed, i.e. analogous to pure functions.
C, Java, C++, C#, PHP, and JavaScript aren't particularly declarative. Copute's syntax and Python's syntax are more declaratively coupled to intended results, i.e. consistent syntactical semantics that eliminate the extraneous so one can readily comprehend code after they've forgotten it. Copute and Haskell enforce determinism of the operational semantics and encourage “don't repeat yourself” (DRY), because they only allow the pure functional paradigm.
2 Even where we can prove the semantics of a program, e.g. with the language Coq, this is limited to the semantics that are expressed in the typing, and typing can never capture all of the semantics of a program— not even for languages that are not Turing complete, e.g. with HTML+CSS it is possible to express inconsistent combinations which thus have undefined semantics.
3 Many explanations incorrectly claim that only imperative programming has syntactically ordered statements. I clarified this confusion between imperative and functional programming. For example, the order of HTML statements does not reduce the consistency of their meaning.
Edit: I posted the following comment to Robert Harper's blog:
in functional programming ... the range of variation of a variable is a type
Depending on how one distinguishes functional from imperative programming, your ‘assignable’ in an imperative program also may have a type placing a bound on its variability.

The only non-muddled definition I currently appreciate for functional programming is a) functions as first-class objects and types, b) preference for recursion over loops, and/or c) pure functions— i.e. those functions which do not impact the desired semantics of the program when memoized (thus perfectly pure functional programming doesn't exist in a general purpose denotational semantics due to impacts of operational semantics, e.g. memory allocation).

The idempotent property of a pure function means the function call on its variables can be substituted by its value, which is not generally the case for the arguments of an imperative procedure. Pure functions seem to be declarative w.r.t. the uncomposed state transitions between the input and result types.

But the composition of pure functions does not maintain any such consistency, because it is possible to model a side-effect (global state) imperative process in a pure functional programming language, e.g. Haskell's IOMonad, and moreover it is entirely impossible to prevent doing so in any Turing complete pure functional programming language.

As I wrote in 2012, and which seems to be the consensus of comments in your recent blog, declarative programming is an attempt to capture the notion that the intended semantics are never opaque. Examples of opaque semantics are dependence on order, dependence on erasure of higher-level semantics at the operational semantics layer (e.g. casts are not conversions and reified generics limit higher-level semantics), and dependence on variable values which can not be checked (proved correct) by the programming language. Thus I have concluded that only non-Turing complete languages can be declarative.

Thus one unambiguous and distinct attribute of a declarative language could be that its output can be proven to obey some enumerable set of generative rules. For example, for any specific HTML program (ignoring differences in the ways interpreters diverge) that is not scripted (i.e. is not Turing complete), its output variability can be enumerated. Or more succinctly, an HTML program is a pure function of its variability. Ditto a spreadsheet program is a pure function of its input variables.

So it seems to me that declarative languages are the antithesis of unbounded recursion, i.e. per Gödel's second incompleteness theorem, self-referential theorems can't be proven.

Leslie Lamport wrote a fairytale about how Euclid might have worked around Gödel's incompleteness theorems applied to math proofs in the programming language context, via the congruence between types and logic (Curry-Howard correspondence, etc).
Declarative programming is "the act of programming in languages that conform to the mental model of the developer rather than the operational model of the machine".
The difference between declarative and imperative programming is well illustrated by the problem of parsing structured data.

An imperative program would use mutually recursive functions to consume input and generate data. A declarative program would express a grammar that defines the structure of the data so that it can then be parsed.

The difference between these two approaches is that the declarative program creates a new language that is more closely mapped to the mental model of the problem than is its host language.
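To make that concrete, here is a small Haskell sketch of my own (not from the answer): a toy parser-combinator library in which the grammar expr ::= '(' expr ')' | 'x' is stated almost verbatim, rather than being hand-coded as recursive-descent plumbing at every use site.

newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure x = Parser $ \s -> Just (x, s)
  Parser pf <*> Parser px = Parser $ \s ->
    case pf s of
      Nothing      -> Nothing
      Just (f, s') -> case px s' of
        Nothing       -> Nothing
        Just (x, s'') -> Just (f x, s'')

-- match a single literal character
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:xs) | x == c -> Just (c, xs)
  _               -> Nothing

-- try the first parser; fall back to the second if it fails
orElse :: Parser a -> Parser a -> Parser a
orElse (Parser p) (Parser q) = Parser $ \s -> maybe (q s) Just (p s)

-- the grammar  expr ::= '(' expr ')' | 'x'  stated declaratively:
expr :: Parser ()
expr = ((\_ _ _ -> ()) <$> char '(' <*> expr <*> char ')') `orElse` (() <$ char 'x')

-- runParser expr "((x))"  ==>  Just ((), "")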
It may sound odd, but I'd add Excel (or any spreadsheet really) to the list of declarative systems. A good example of this is given here.
I'd explain it as: DP is a way to express
A goal expression -- the conditions for what we are searching for. Is there one, maybe, or many?
Some known facts
Rules that extend the known facts
...and where there is a deduction engine, usually working with a unification algorithm, to find the goals.
As far as I can tell, it started being used to describe programming systems like Prolog, because Prolog is (supposedly) about declaring things in an abstract way.
It increasingly means very little, given the range of definitions offered by other users above. It should be clear that there is a gulf between the declarative programming of Haskell and the declarative programming of HTML.
A couple other examples of declarative programming:
ASP.Net markup for databinding. It just says "fill this grid with this source", for example, and leaves it to the system to work out how that happens.
Linq expressions
Declarative programming is nice because it can help simplify your mental model of code, and because it might eventually be more scalable.
For example, let's say you have a function that does something to each element in an array or list. Traditional code would look like this:
foreach (object item in MyList)
{
    DoSomething(item);
}
No big deal there. But what if you use the more-declarative syntax and instead define DoSomething() as an Action? Then you can say it this way:
MyList.ForEach(DoSomething);
This is, of course, more concise. But I'm sure you have more concerns than just saving two lines of code here and there. Performance, for example. The old way, processing had to be done in sequence. What if the .ForEach() method had a way for you to signal that it could handle the processing in parallel, automatically? Now all of a sudden you've made your code multi-threaded in a very safe way and only changed one line of code. And, in fact, there's an extension for .Net that lets you do just that.
If you follow that link, it takes you to a blog post by a friend of mine. The whole post is a little long, but you can scroll down to the heading titled "The Problem" and pick it up there.
It depends on how you submit the answer to the text. Overall you can look at the program from a certain view, but it depends what angle you look at the problem from. I will get you started with the program:
Dim Bus, Car, Time, Height As Integer
Again, it depends on what the problem is overall. You might have to shorten it due to the program. Hope this helps, and give feedback if it does not.
Thank you.