Sorry for this English-related question, but I only came across that expression in the context of IT. What does abstracting over something mean? For example, abstracting over objects or abstracting over classes.
Thanks
In this context, the word "abstract" comes from the lambda calculus, where it means "to make something a parameter" (a value parameter or a type parameter). The word is used more generally with other kinds of parameters; for example, mechanisms for "generic programming" often include ways of abstracting over classes.
Probably the easiest language in which to abstract over objects and classes is Smalltalk, where everything (including every class) is an object. Smalltalk, like Ruby (which is closely based on Smalltalk), has "duck typing", so for example you could "abstract over" any collection class by writing Smalltalk code that uses only methods common to all collection classes. You could abstract over collection objects in a similar way.
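A hedged Haskell sketch of the two kinds of parameters (the example names are my own): the element type is abstracted in the first function, and the collection "class" is abstracted in the second, which uses only operations every Foldable collection supports.

    -- Abstracting over a type: the element type is a parameter, so this
    -- works for lists of anything.
    firstOrDefault :: a -> [a] -> a
    firstOrDefault def []      = def
    firstOrDefault _   (x : _) = x

    -- Abstracting over a collection "class": only operations common to
    -- every Foldable collection are used, so lists, Sets, Maps, trees, ...
    -- all work.
    summarize :: Foldable t => t a -> String
    summarize xs = "a collection with " ++ show (length xs) ++ " elements"

    -- summarize [1, 2, 3] == "a collection with 3 elements"
    -- (a Data.Set or Data.Map value works just as well)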
It means to pull something out of a function and make it an argument. It makes more sense in functional programming, but imagine you have a function that takes an integer and adds five to it; you could make that five a variable and have a sum function that would work on any two integers.
That case is not so interesting. Now what if you pulled the addition operation up and made it an argument? Now you have a function that takes three arguments and calls the third as a function on the first two. Here you have abstracted the operation out of the function.
Sorry this is such a weak explanation, I will fix it later when I get a chance. It makes sense with a couple of nice examples, I just have to run ATM.
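A tiny Haskell sketch of that progression (the example names are my own):

    -- The starting point: a fixed function that adds five.
    addFive :: Int -> Int
    addFive x = x + 5

    -- Abstract over the five: it becomes an ordinary argument.
    add :: Int -> Int -> Int
    add n x = x + n

    -- Abstract over the operation itself: (+) becomes an argument, so the
    -- same function works for any binary operation on integers.
    applyOp :: (Int -> Int -> Int) -> Int -> Int -> Int
    applyOp op x y = op x y

    -- applyOp (+) 2 3 == 5
    -- applyOp (*) 2 3 == 6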
I have an algebraic data type that's being used all throughout my program. I've realized that there's a point where I need to annotate all my structures of that type with a simple string.
I would like to not go through tons of code to account for adding a field to my commonly used type. Especially since there's no meaningful value for this annotation until quite far in my program, it seems excessive to refactor a thousand lines of code for a trivial change.
Also, as the type is pretty complex, it seems silly to just duplicate the type and make a slightly different version.
Is this just a weakness of Haskell, or am I missing the right way to handle this? I assume it's the latter, but I can't find anything akin to optional parameters to a type constructor.
I wouldn't go as far as declaring this "a weakness of Haskell"; it's more like something that needs to be taken into account when dealing with ADTs (and not specifically in Haskell).
Each program consists of "entities" and "actions". Your problem is that you need to modify an entity (an ADT in Haskell).
In an OOP language the solution would be straightforward: subclass your type. But if you decide to change the signature of a method in an existing class, you'll have no choice but to go through your entire codebase and deal with the damage.
In a functional language, functions are "cheap": each function is independent of the ADT, and refactoring one is often simple. Is that a weakness of OOP languages? No, it's just a tradeoff between ease of modifying existing "actions" (which is easy in functional languages) and ease of modifying "entities" (which is easy in OOP languages).
And on a practical note:
Designing an ADT carefully is very important, since changing it can be very expensive.
@bheklilr's advice is priceless - follow it!
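One common technique that gives the effect of an optional field, sketched here with made-up names (a general approach, not necessarily the advice referred to above): make the annotation a type parameter and instantiate it with () until you have something meaningful to put there.

    {-# LANGUAGE DeriveFunctor #-}

    -- A cut-down stand-in for the "pretty complex" type in the question;
    -- the annotation is a type parameter rather than a hard-coded field.
    data Expr ann
      = Lit Int
      | Add (Expr ann) (Expr ann) ann
      deriving (Show, Functor)

    -- Early phases work with Expr (), so no meaningless placeholder
    -- strings have to be invented:
    unannotated :: Expr ()
    unannotated = Add (Lit 1) (Lit 2) ()

    -- A later phase switches to Expr String; fmap rewrites every
    -- annotation in one go, so existing traversal code stays untouched.
    annotate :: Expr () -> Expr String
    annotate = fmap (const "some annotation")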
I'm writing a small tool for generating PHP checks from JavaScript code, and I would like to know if anyone knows of a standard way of transforming functional code into imperative code?
I found this paper, Defunctionalization at Work; it explains defunctionalization pretty well.
Lambda lifting and defunctionalization somewhat answered the question, but what about data structures? We are still parsing lists as if they are all linked lists. Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Here are a few additions to @Artyom's list:
you can convert tail recursion into loops and assignments
linear types can be used to introduce assignments, e.g. y = f x can be replaced with x := f x if x is linear and has the same type as y
at least two kinds of defunctionalization are possible: Reynolds-type defunctionalization, where you replace a higher-order application with a switch full of first-order applications (see the sketch after this list), and inlining (however, it is not always possible to inline recursive functions)
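Here is a hedged toy sketch of the Reynolds-type transformation (my own example, in Haskell for brevity): the functional argument is replaced by a first-order data type, and an apply function plays the role of the switch over first-order applications.

    -- Before defunctionalization: a higher-order function.
    mapHO :: (Int -> Int) -> [Int] -> [Int]
    mapHO _ []       = []
    mapHO f (x : xs) = f x : mapHO f xs

    -- After: one constructor per function that used to be passed in...
    data Fun = AddConst Int | Negate | Square

    -- ...and an "apply" that is just a first-order switch.
    apply :: Fun -> Int -> Int
    apply (AddConst n) x = x + n
    apply Negate       x = negate x
    apply Square       x = x * x

    -- Former higher-order call sites now pass a Fun value instead.
    mapFO :: Fun -> [Int] -> [Int]
    mapFO _ []       = []
    mapFO f (x : xs) = apply f x : mapFO f xs

    -- mapFO (AddConst 5) [1, 2, 3] == [6, 7, 8]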
Perhaps you are interested in removing some language elements (such as higher-order functions), right?
For eliminating HOFs from a program, there are techniques such as defunctionalization. For removing closures, you can use lambda-lifting (aka closure conversion). Is this something you are interested in?
I think you need to provide a concrete example of code you have, and the target code you intend to produce, so that others may propose solutions.
Added:
Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Yes. Linked lists are represented with pointers in C++ (a structure "node" with two fields: one for the "payload", another for the "next" pointer; the empty list is then represented as a NULL pointer, but sometimes people prefer to use special "sentinel values"). Note that, if the code in the source language does not rely on the representation of singly linked lists (in the source language implementation), you can also implement the "cons"/"nil" operations using a vector in the target language (not sure if this suits your needs, though). The idea here is to give alternative implementations for the familiar operations.
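That last point can be sketched like this (my own toy example, using Haskell and its containers library rather than C++): keep the familiar operations as an interface and give them more than one backing representation.

    import           Data.Sequence (Seq, ViewL (..), viewl, (<|))
    import qualified Data.Sequence as Seq

    -- The familiar list operations as an interface...
    class ListLike f where
      nil    :: f a
      cons   :: a -> f a -> f a
      uncons :: f a -> Maybe (a, f a)

    -- ...with the usual singly linked list implementation...
    instance ListLike [] where
      nil             = []
      cons            = (:)
      uncons []       = Nothing
      uncons (x : xs) = Just (x, xs)

    -- ...and an alternative backed by a different structure (Seq here,
    -- standing in for a vector/ArrayList-style container).
    instance ListLike Seq where
      nil      = Seq.empty
      cons     = (<|)
      uncons s = case viewl s of
                   EmptyL  -> Nothing
                   x :< xs -> Just (x, xs)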
No, there is not.
The reason is that there is no such concrete and well-defined thing as "functional code" or "imperative code".
Such transformations exist only for concrete instances of your abstraction: for example, there are transformations from Haskell code to LLVM bytecode, F# code to CLI bytecode or Frege code to Java code.
(I doubt if there is one from Javascript to PHP.)
Depends on what you need. The usual answer is "there is no such tool", because the result will not be usable. However, look at it from this standpoint:
The set of assembler instructions in a computer defines an imperative machine. Hence the compiler needs to do such a translation. However, I assume you do not want assembler code but something more readable.
Usually these kinds of heavy program transformations are done manually, if one is interested in the result, or automatically if the result will never be looked at by a human.
What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the Wikipedia entry for Objective-C, I came across:
In addition to C’s style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
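As a concrete (and hedged, my own) illustration of that last definition in Haskell terms: the function below assumes nothing about its element types beyond what its signature states, so one definition is reusable across a wide range of datatypes.

    -- Generic programming: the only assumption is "I am given two lists";
    -- the element types are parameters, so one definition serves them all.
    -- (This is essentially the standard zipWith.)
    pairwise :: (a -> b -> c) -> [a] -> [b] -> [c]
    pairwise f (x : xs) (y : ys) = f x y : pairwise f xs ys
    pairwise _ _        _        = []

    -- pairwise (+) [1, 2, 3] [10, 20, 30] == [11, 22, 33]
    -- pairwise (,) "abc" [True, False]    == [('a',True), ('b',False)]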
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
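Since Template Haskell is on that list, here is a hedged minimal sketch (my own example) of what "same language for both levels" looks like: the metaprogram below is ordinary Haskell that builds Haskell declarations, which another module splices in at compile time.

    {-# LANGUAGE TemplateHaskell #-}
    module Meta where

    import Language.Haskell.TH

    -- A metaprogram: ordinary Haskell code that constructs Haskell
    -- declarations. Splicing  $(constants ["width", "height"])  in another
    -- module generates  width = 0  and  height = 1  at compile time.
    constants :: [String] -> Q [Dec]
    constants names =
      pure [ ValD (VarP (mkName n)) (NormalB (LitE (IntegerL i))) []
           | (n, i) <- zip names [0 ..] ]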
Metaprogramming, in a broad sense, means writing programs that yield other programs. For example, templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as an input and produces an actual function/class as an output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about parameterized (usually with other types) types (including functions).
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types, e.g. one sort function that can sort any collection of comparables, instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++, a single feature supports both Generic Programming and Metaprogramming. (Templates are very powerful).
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (Hygienic Macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming, code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming. I hope AMB will become mainstream in the next 5 years or so...)
In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or Vector<YourClass> if you want it specific to your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates themselves are Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
Generic programming is a very simple form of metacoding, albeit not usually at runtime. It's more like the preprocessor in C, and in most use cases and basic implementations it relates more to template programming.
You'll often find in typed languages that you create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting one of them to a string literal and then replacing the class name with a variable to interpolate.
Where generics are used at runtime it's a bit different; in that case it's simply variable programming, programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, such as the type of a variable you expect or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication. Although there are ways around it or to mitigate it, they aren't particularly convenient. Generics came along as a kind of variable variable, to allow making the variable type variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metacoding, albeit a very simple case.
The effect of not having variability where you need it is to unroll your variables; that is, instead of having a variable you are forced to write an implementation for every possible would-be variable value.
As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. These will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object, which extends Object, and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. This will break compilation with the downgrade of the object unless you cast it back to Shop. This raises another conundrum: without generics there is no way to check for compatibility and ensure the object you are storing is a Shop.
Java avoids metaprogramming and tries to keep the language simple, using the OOP principle of polymorphism instead to make flexible code. However, there are some pressing and recurring problems that through experience have presented themselves, and they are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language, but it sparingly imports concepts from there to solve the most nagging problems.
Programming languages that offer lavish metacoding facilities can be significantly more productive than languages that avoid it, barring special cases (reflection, OOP polymorphism, etc.). However, it often also takes a lot more skill and expertise to produce understandable, maintainable and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.
A first-class value can be
passed as an argument
returned from a subroutine
assigned to a variable.
A second-class value can only be passed as an argument.
A third-class value cannot even be passed as an argument.
Why should these things be defined like that? As I understand it, "can be passed as an argument" means it can be pushed onto the runtime stack; "can be assigned to a variable" means it can be moved into a different location in memory; and "can be returned from a subroutine" has almost the same meaning as "can be assigned to a variable", since the returned value is always put into a known address. So a first-class value is totally "movable" or "dynamic", a second-class value is half "movable", and a third-class value is just "static", such as labels in C/C++, which can only be addressed by a goto statement, and you can't do anything with that address except goto. Does my understanding make any sense, or what do these three kinds of values mean exactly?
Oh no, I may have to go edit Wikipedia again.
There are really only two distinctions worth making: first-class and not first-class. If Michael Scott talks about a third-class anything, I'll be very depressed.
Ok, so what is "first-class," anyway? Well, it is a term that barely has a technical meaning. The meaning, when present, is usually comparative, and it applies to a thing in a language (I'm being deliberately vague here) that has more privileges than a comparable thing. That's all people mean by it.
Let's look at some examples:
Function pointers in C are first-class values because they can be passed to functions, returned from functions, and stored in heap-allocated data structures just like any other value. Functions in Pascal and Ada are not first-class values because although they can be passed as arguments, they cannot be returned as results or stored in heap-allocated data structures.
Struct types are second-class types in C, because there are no literal expressions of struct type. (Since C99 there are literal initializers with named fields, but this is still not as general as having a literal anywhere you can use an expression.)
Polymorphic values are second-class values in ML because although they can be let-bound to names, they cannot be lambda-bound. Therefore they cannot be passed as arguments. But in Haskell, because Haskell supports higher-rank polymorphism, polymorphic values are first-class. (They can even be stored in data structures!)
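Here's a hedged Haskell sketch of that (my own example, using RankNTypes): a polymorphic value is lambda-bound, passed as an argument, and stored in a data structure.

    {-# LANGUAGE RankNTypes #-}

    -- A genuinely polymorphic value: one function that works on a list of
    -- any element type.
    len :: forall a. [a] -> Int
    len = length

    -- Higher-rank type: the polymorphic value is lambda-bound and passed
    -- as an argument (it is used at two different element types inside).
    applyToBoth :: (forall a. [a] -> Int) -> ([Bool], String) -> (Int, Int)
    applyToBoth f (xs, ys) = (f xs, f ys)

    -- ...and it can even be stored in a data structure.
    newtype Measure = Measure { runMeasure :: forall a. [a] -> Int }

    main :: IO ()
    main = do
      print (applyToBoth len ([True, False], "hello"))  -- (2,5)
      print (runMeasure (Measure len) [1 :: Int, 2, 3]) -- 3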
In Java, the type int is second class because you can't inherit from it. Type Integer is first class.
In C, labels are second class, because they don't have values and you can't compute with them. In FORTRAN, line numbers have values and so are first class. There is a GNU extension to C that allows you to define first-class labels, and it is jolly useful. What does first-class mean in this case? It means the labels have values, can be stored in data structures, and can be used in goto. But those values are second class in another sense, because a label from one procedure can't meaningfully be used in a goto that belongs to another procedure.
Are we getting an idea how useless this terminology is?
I hope these examples convince you that the idea of "first-class" is not a very useful idea in thinking about programming languages overall. When you're talking about a particular feature of a particular language or language family, it can be a useful shorthand ("a language isn't functional unless it has first-class, nested functions") but by and large you're better off saying just what you mean instead of talking about "first-class" or "not first-class" things.
As for "third class", just say no.
Something is first-class if it is explicitly manipulable in the code. In other words, something is first-class if it can be programmatically manipulated at run-time.
This closely relates to meta-programming in the sense that what you describe in the code (at development time) is one meta-level, and what exists at run-time is another meta-level. But the barrier between these two meta-levels can be blurred, for instance with reflection. When something is reified at run-time, it becomes explicitly manipulable.
We speak of first-class objects because objects can be manipulated programmatically at run-time (that's their very purpose).
In Java, you have classes, but they are not first-class, because the code normally cannot manipulate a class unless you use reflection. But in Smalltalk, classes are first-class: the code can manipulate a class like a regular object.
In Java, you have packages (modules), but they are not first-class, because the code does not manipulate packages at run-time. But in Newspeak, packages (modules) are first-class; you can instantiate a module and pass it to another module to specify the modularity at run-time.
In C#, you have closures, which are first-class functions. They exist and can be manipulated at run-time programmatically. Such things do not exist (yet) in Java.
To me, the boundary between first-class and not first-class is not exactly strict. It is sometimes hard to say for some language constructs, e.g. Java primitive types. We could say they are not first-class because a primitive is not an object and is not manipulable through a reference that can be passed along, but the primitive value does still exist and can be manipulated at run-time.
PS: I agree with Norman Ramsey; 2nd-class and 3rd-class values make no sense to me.
First-class: A first-class construct is one which is an intrinsic element of a language. The following properties must hold.
It must form part of the lexical syntax of the language
It may have operators applied to it
It must be referenceable (for example stored in a variable)
Second-class: A second-class construct is one which is an intrinsic element of the language with the following properties.
It must form part of the lexical syntax of the language
It may have operators applied to it
Third-class: A third-class construct is one which forms part of the syntax of a language.
from: Roger Keays and Andry Rakotonirainy. Context-oriented programming. In Proceedings of the 3rd ACM International Workshop on Data Engineering for Wireless and Mobile Access, MobiDe ’03, pages 9–16, New York, NY, USA, 2003. ACM.
Those terms are very broad and not really globally well defined, but here are the most logical definitions for them:
First-class values are the ones that have actual, tangible values, and so can be operated on and passed around as variables, arguments, return values or whatever.
This doesn't really need a thorough example, does it? In C, an int is first-class.
Second-class values are more limited. They have values, but they can't be used directly, so the compiler deliberately limits what you can do with them. You can reference them, though, so you can still have a first-class value representing them.
For example, in C, a function is a second-class value. It can't be altered, but it can be called and referenced.
Third-class values are even more limited. Not only do they not have values, but interaction with them is completely absent; often they exist only to be used as compile-time attributes.
For example, in Rust, a lifetime is a third-class value. You can't use the lifetime at all. You can only receive it as a template parameter, you can only use it as a template parameter (only when creating a new variable), and that's all you can do with it.
Another example, in C++, a struct or a class is a third-class value. This doesn't need much explanation.
Suppose you have this pseudo-code
do_something();
function do_something(){
print "I am saying hello.";
}
Why do some programming languages require the call to do_something() to appear below the function declaration in order for the code to run?
Programming languages use a symbol table to hold the various classes, functions, etc. that are used in the source code. Some languages compile in a single pass, whereby the symbols are pulled out of the symbol table as soon as they are used. Others use two passes, where the first pass is used to populate the table, and then the second is used to find the entries.
Most languages with a static type system are designed to require definition before use, which means there must be some sort of declaration of a function before the call so that the call can be checked (e.g., is the function getting the right number and types of arguments). This sort of design helps both a person and a compiler reading the program: everything you see has already been defined. The ease of reading and the popularity of one-pass compilers may explain the popularity of this design rule.
Unfortunately definition before use does not play well with mutual recursion, and so language designers resorted to an ugly hack whereby you have
Declaration (sometimes called a "forward declaration" from the keyword in Pascal)
Use
Definition
You see the same phenomenon at the type level in C in the form of the "incomplete struct declaration."
Around 1990 some language designers figured out that the one-pass compiler with no abstract-syntax tree should be a thing of the past, and two very nice designs from that era, Modula-3 and Haskell, got rid of definition before use: in those languages, any defined function or variable is visible throughout its scope, including parts of the program textually before the definition. In other words, mutual recursion is the default for both types and functions. Good on them, I say: these languages have no ugly and unnecessary forward declarations.
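A small Haskell illustration of that default (my own example): the call appears textually before either definition, the two functions refer to each other, and no forward declaration is needed.

    -- Haskell: definitions are visible throughout their scope, so `isEven`
    -- can be used textually before it is defined, and `isEven`/`isOdd`
    -- can refer to each other with no forward declarations.
    main :: IO ()
    main = print (isEven 10)   -- use before definition

    isEven :: Int -> Bool
    isEven 0 = True
    isEven n = isOdd (n - 1)

    isOdd :: Int -> Bool
    isOdd 0 = False
    isOdd n = isEven (n - 1)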
Why [have definition before use]?
Easy to write a one-pass compiler in 1975.
Without definition before use, you have to think harder about mutual recursion, especially mutually recursive type definitions.
Some people think it makes it easier for a person to read the code.