Why are primitives not reference types?

I was reading about value and reference types, and a question I couldn't find a clear answer to was why primitives like int, double, etc. are not reference types, the way strings are, for example.
I know strings, arrays, and other objects can be pretty big compared to ints (which I gather is the primary argument for references), so is the only reason not to make primitives reference types that it would be overkill?

This is only the case in some programming languages, and this is typically done as an optimization (in order to avoid the need to perform memory dereferences or allocations for such simple types). However, there are languages that make basic numeric types and programmer-defined objects look and behave identically, sometimes selecting between a true object and a simple object automatically in the compiler or interpreter to maintain efficiency when the object-like capabilities are not used.
Python and Scala are examples where basic integers and regular objects are indistinguishable; Java, C++, and C are examples where built-in types are distinct from programmer-defined types.
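To make the cost concrete, here is a minimal C++ sketch (my own illustration, not tied to any particular language implementation) contrasting value semantics with heap-allocated, reference-like semantics for the same integer:

    #include <memory>

    int main() {
        // Value semantics: the int lives directly in the variable
        // (a register or the stack); copying copies the bits.
        int a = 42;
        int b = a;       // independent copy: no allocation, no indirection
        b = 7;           // does not affect a

        // Reference semantics: the payload lives on the heap and the
        // variable holds a pointer to it, so every access costs a
        // dereference and every creation costs an allocation.
        auto ra = std::make_shared<int>(42);
        auto rb = ra;    // rb and ra alias the SAME heap object
        *rb = 7;         // visible through ra as well
        return 0;
    }

For a type as small as an int, the pointer is at least as large as the payload itself, which is why languages that split the two keep the basic numeric types as values.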

Related

Is an object a data structure?

I know things like arrays, linked lists, etc. are data structures... but what about objects?
Say I create an object, employee, that stores and keeps track of the employee's name, salary, phone number, and so on.
It's a question of semantics really, and I expect it could be argued either way.
When you create an object, you are telling a compiler or an interpreter to store a group of information together. Usually the interpreter/compiler will use some type of data structure to store that information (Python uses a hash table, for example).
I might call that data structure the object if I pointed it out in a hex dump, but that's just because saying 'the bytes that represent the object' is a bit inconvenient.
You could (and maybe someone has) write a compiler that stores many objects in one data structure. In that case there would be no one-to-one mapping between object and data structure. So for that reason I'm going to say no: an object is not a data structure, but it is normally stored in one.
Interestingly, before the OOP paradigm came into vogue, languages like C used structs (structures), which in C are simply contiguous blocks of virtual memory with named offsets (members), although that does get more complex with things like unions.
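To illustrate "named offsets" concretely, a minimal sketch (the Employee struct is made up for the example):

    #include <cstddef>
    #include <cstdio>

    struct Employee {
        long   id;      // at offset 0
        double salary;  // at a fixed offset after id (possibly padded)
    };

    int main() {
        // At the machine level, member names are just fixed offsets
        // into one contiguous block of memory.
        std::printf("id @ %zu, salary @ %zu, sizeof = %zu\n",
                    offsetof(Employee, id),
                    offsetof(Employee, salary),
                    sizeof(Employee));
        return 0;
    }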
But essentially a structure, as @James pointed out, is exactly that: some collection or grouping of related things together, in a way a programmer (or mad scientist) feels is logical.
In the modern programming lexicon, with languages such as Java, C#, etc., your objects usually represent one real-world thing, such as a Customer, Order, <insert over-used object example here>, etc., while your data structures (usually collections in the libraries of the languages I mentioned) represent the containment of multiple "Objects".
Strictly speaking, however (and just to confuse everyone), the data structures in languages like Java and C# are objects! (i.e., they are passed around by reference, can have methods called on them, etc.)
In a more classical (and perhaps more CS derived) sense, "data structures" are collections of behaviours (typically algorithms) that are used to manage the memory that stores data, while "objects" are that data.

Is there a standardized way to transform functional code to imperative code?

I'm writing a small tool for generating PHP checks from JavaScript code, and I would like to know if anyone knows of a standard way of transforming functional code into imperative code.
I found this paper: Defunctionalization at Work; it explains defunctionalization pretty well.
Lambda lifting and defunctionalization somewhat answered the question, but what about data structures? We are still parsing lists as if they are all linked lists. Would there be a way of transforming the linked lists of functional languages into other high-level data structures like C++ vectors or Java ArrayLists?
Here are a few additions to @Artyom's list:
you can convert tail recursion into loops and assignments (see the sketch after this list)
linear types can be used to introduce assignments, e.g. y = f x can be replaced with x := f x if x is linear and has the same type as y
at least two kinds of defunctionalization are possible: Reynolds-style defunctionalization, where you replace a higher-order application with a switch full of first-order applications, and inlining (however, it is not always possible to inline recursive functions)
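A minimal sketch of the first point, converting tail recursion into a loop with assignments (factorial is my own example, chosen for brevity):

    #include <cstdio>

    // Tail-recursive form: the recursive call is the last thing the
    // function does, so its stack frame can be reused.
    long fact_rec(long n, long acc) {
        if (n <= 1) return acc;
        return fact_rec(n - 1, acc * n);   // tail call
    }

    // Mechanical translation: parameters become mutable variables,
    // the tail call becomes reassignment plus a jump back to the top.
    long fact_loop(long n, long acc) {
        while (n > 1) {
            acc = acc * n;   // same argument computations...
            n = n - 1;       // ...turned into assignments
        }
        return acc;
    }

    int main() {
        std::printf("%ld %ld\n", fact_rec(10, 1), fact_loop(10, 1)); // both 3628800
        return 0;
    }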
Perhaps you are interested in removing some language elements (such as higher-order functions)?
For eliminating HOFs from a program, there are techniques such as defunctionalization. For removing closures, you can use lambda lifting (also known as closure conversion). Is this something you are interested in?
I think you need to provide a concrete example of code you have, and the target code you intend to produce, so that others may propose solutions.
Added:
Would there be a way of transforming the linkedlists of functional languages into other high-level datastructures like c++ vectors or java arraylists?
Yes. Linked lists are represented with pointers in C++ (a structure "node" with two fields: one for the "payload", another for the "next" pointer; the empty list is then represented as a NULL pointer, but sometimes people prefer to use special "sentinel values"). Note that, if the code in the source language does not rely on the representation of singly linked lists (in the source language implementation), you can also implement the "cons"/"nil" operations using a vector in the target language (not sure if this suits your needs, though). The idea here is to give alternative implementations for the familiar operations.
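Here is a minimal sketch of that last idea, implementing the familiar list operations over a vector in the target language (the names cons/head/tail are mine):

    #include <cstdio>
    #include <vector>

    using List = std::vector<int>;

    List nil() { return {}; }                      // the empty list

    // cons: prepend an element. O(n) here, unlike a linked list's O(1);
    // the point is only that the familiar *interface* survives the
    // change of representation.
    List cons(int x, const List& xs) {
        List r;
        r.reserve(xs.size() + 1);
        r.push_back(x);
        r.insert(r.end(), xs.begin(), xs.end());
        return r;
    }

    int  head(const List& xs) { return xs.front(); }
    List tail(const List& xs) { return List(xs.begin() + 1, xs.end()); }

    int main() {
        List xs = cons(1, cons(2, nil()));                 // the list [1, 2]
        std::printf("%d %d\n", head(xs), head(tail(xs)));  // 1 2
        return 0;
    }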
No, there is not.
The reason is that there is no such concrete and well defined thing like functional code or imperative code.
Such transformations exist only for concrete instances of your abstraction: for example, there are transformations from Haskell code to LLVM bitcode, F# code to CLI bytecode, or Frege code to Java code.
(I doubt there is one from JavaScript to PHP.)
Depends on what you need. The usual answer is "there is no such tool", because the result would not be usable. However, look at it from this standpoint:
The set of assembler instructions in a computer defines an imperative machine, so every compiler already performs such a translation. However, I assume you do not want assembler code but something more readable.
Usually these kinds of heavy program transformations are done manually, if one is interested in the result, or automatically if the result will never be looked at by a human.

Scalar vs. primitive data type - are they the same thing?

In various articles I have read, there are sometimes references to primitive data types and sometimes there are references to scalars.
My understanding of each is that they are data types of something simple like an int, boolean, char, etc.
Is there something I am missing that means you should use particular terminology or are the terms simply interchangeable?
The Wikipedia pages for each one don't show anything obvious.
If the terms are simply interchangeable, which is the preferred one?
I don't think they're interchangeable. They are frequently similar, but the differences that do exist lie mainly in what each term is contrasted with and what is relevant in context.
Scalars are typically contrasted with compounds, such as arrays, maps, sets, structs, etc. A scalar is a "single" value - integer, boolean, perhaps a string - while a compound is made up of multiple scalars (and possibly references to other compounds). "Scalar" is used in contexts where the relevant distinction is between single/simple/atomic values and compound values.
Primitive types, however, are contrasted with e.g. reference types, and are used when the relevant distinction is "Is this directly a value, or is it a reference to something that contains the real value?", as in Java's primitive types vs. references. I see this as a somewhat lower-level distinction than scalar/compound, though the two don't line up exactly.
It really depends on context (and frequently what language family is being discussed). To take one, possibly pathological, example: strings. In C, a string is a compound (an array of characters), while in Perl, a string is a scalar. In Java, a string is an object (or reference type). In Python, everything is (conceptually) an object/reference type, including strings (and numbers).
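A small C++ illustration of the scalar/compound contrast described above (my own snippet):

    #include <string>

    int main() {
        // Scalar: one single, atomic value.
        int n = 42;
        bool ok = true;

        // Compound: built up from multiple scalars.
        struct Point { int x; int y; };
        Point p{1, 2};

        // String: compound under the hood (an array of chars), yet
        // handled as a single value at the API level -- exactly the
        // ambiguity described above.
        std::string s = "hello";

        (void)n; (void)ok; (void)p; (void)s;
        return 0;
    }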
There's a lot of confusion and misuse of these terms. Often one is used to mean another. Here is what those terms actually mean.
"Native" refers to types that are built into to the language, as opposed to being provided by a library (even a standard library), regardless of how they're implemented. Perl strings are part of the Perl language, so they are native in Perl. C provides string semantics over pointers to chars using a library, so pointer to char is native, but strings are not.
"Atomic" refers to a type that can no longer be decomposed. It is the opposite of "composite". Composites can be decomposed into a combination of atomic values or other composites. Native integers and floating point numbers are atomic. Fractions, complex numbers, containers/collections, and strings are composite.
"Scalar" -- and this is the one that confuses most people -- refers to values that can express scale (hence the name), such as size, volume, counts, etc. Integers, floating point numbers, and fractions are scalars. Complex numbers, booleans, and strings are NOT scalars. Something that is atomic is not necessarily scalar and something that is scalar is not necessarily atomic. Scalars can be native or provided by libraries.
Some types have odd classifications. BigNumber types, usually implemented as an array of digits or integers, are scalars, but they're technically not atomic. They can appear to be atomic if the implementation is hidden and you can't access the internal components. But the components are only hidden, so the atomicity is an illusion. They're almost invariably provided in libraries, so they're not native, but they could be. In the Mathematica programming language, for example, big numbers are native and, since there's no way for a Mathematica program to decompose them into their building blocks, they're also atomic in that context, despite the fact that they're composites under the covers (where you're no longer in the world of the Mathematica language).
These definitions are independent of the language being used.
Put simply, it would appear that a 'scalar' type refers to a single item, as opposed to a composite or collection. So scalars include both primitive values and things like enum values.
http://ee.hawaii.edu/~tep/EE160/Book/chap5/section2.1.3.html
Perhaps the 'scalar' term may be a throwback to C:
where scalars are primitive objects which contain a single value and are not composed of other C++ objects
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/1995/N0774.pdf
I'm curious whether this refers to these items having a value of 'scale', such as counting numbers.
I like Scott Langeberg's answer because it is concise and backed by authoritative links. I would up-vote Scott's answer if I could.
I suppose that "primitive" data type could be considered primary data type so that secondary data types are derived from primary data types. The derivation is through combining, such as a C++ struct. A struct can be used to combine data types (such as and int and a char) to get a secondary data type. The struct-defined data type is always a secondary data type. Primary data types are not derived from anything, rather they are a given in the programming language.
I see a parallel to "primitive" meaning primary in the term "regular expression": "regular" can be understood as "regulating", so you have an expression that regulates the search.
Scalar etymology (http://www.etymonline.com/index.php?allowed_in_frame=0&search=scalar&searchmode=none) means ladder-like. I think the way this relates to programming is that a ladder has only one dimension: how many rungs from the end of the ladder. A scalar data type likewise has only one dimension, and is thus represented by a single value.
I think in usage, primitive and scalar are interchangeable. Is there any example of a primitive that is not scalar, or of a scalar that is not primitive?
Although often used interchangeably, "primitive" refers to the data type being a basic building block of other data types; a primitive is not composed of other data types.
"Scalar" refers to its having a single value. Scalar contrasts with the mathematical vector: a vector is not represented by a single value because (using one kind of vector as an example) one value is needed to represent the vector's direction and another to represent its magnitude.
Reference links:
http://whatis.techtarget.com/definition/primitive
http://en.wikipedia.org/wiki/Primitive_data_type
Being scalar has nothing to do with the language, whereas being primitive is all dependent on the language. The two have nothing to do with each other.
A scalar data type is something that has a finite set of possible values, following some scale, i.e. each value can be compared to any other value as either equal, greater or less. Numeric values (floating point and integer) are the obvious examples, while discrete/enumerated values can also be considered scalar. In this regard, boolean is a scalar with 2 discrete possible values, and normally it makes sense that true > false. Strings, regardless of programming language, are technically not scalars.
Now what is primitive depends on the language. Every language classifies what its "basic types" are, and these are designated as its primitives. In JavaScript, string is primitive, despite it not being a scalar in the general sense. But in some languages a string is not primitive. To be a primitive type, the language must be able to treat it as immutable, and for this reason referential types such as objects, arrays, and collections cannot be primitive in most, if not all, languages.
In C, enumeration types, characters, and the various representations of integers form a more general type class called scalar types. Hence, the operations you can perform on values of any scalar type are the same as those for integers.
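A quick illustration of that rule (my own snippet, written to compile as C++): enumeration, character, and integer values all admit the same comparisons and arithmetic as plain integers.

    #include <cstdio>

    enum Color { RED, GREEN, BLUE };

    int main() {
        char  c   = 'A';
        Color col = GREEN;
        // Scalar types promote to integers, so the same operators apply.
        std::printf("%d %d %d\n",
                    c + 1,        // 66: character arithmetic
                    col < BLUE,   // 1: enum comparison
                    c > col);     // 1: mixed scalar comparison (65 > 1)
        return 0;
    }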
The null type is arguably the only thing that truly conforms to the definition of a "scalar type". Even the serialization of 'None' as 'N.', fitting into a 16-bit word (traditionally scalar), or even a single bit, which has multiple possible values, isn't a "single datum".
Every primitive is scalar, but not vice versa. DateTime is scalar, but not primitive.

Generic programming vs. Metaprogramming

What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the Wikipedia entry for Objective-C, I came across:
In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
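A minimal C++ sketch of that last definition (my own example): one definition, minimal assumptions (only that operator< exists), many datatypes.

    #include <algorithm>
    #include <string>
    #include <vector>

    // Generic programming: the only assumption about T is that
    // operator< exists, so one definition covers many datatypes.
    template <typename T>
    T smallest(const std::vector<T>& xs) {
        return *std::min_element(xs.begin(), xs.end());
    }

    int main() {
        std::vector<int> a{3, 1, 2};
        std::vector<std::string> b{"pear", "apple"};
        int  x = smallest(a);   // instantiated for int
        auto y = smallest(b);   // instantiated for std::string
        (void)x; (void)y;
        return 0;
    }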
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs. For example, templates in C++ produce actual code only when instantiated; one can interpret a template as a program that takes a type as input and produces an actual function/class as output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces some SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about types (including functions) parametrized, usually, by other types.
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types, e.g. one sort function that can sort any collection of comparables, instead of one sort function to sort an array of ints and another to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++ a single feature supports both generic programming and metaprogramming (templates are very powerful).
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming: code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming; I hope AMB will become mainstream in the next 5 years or so...)
In Java / C#, you can have generic programming through generics. You can write one generic class that supports the types of many other classes. For instance, in Java you can use Vector<Integer> to create a Vector of Integers, or Vector<YourClass> if you want it specific to your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates are themselves Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
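Since links rot, here is the classic compile-time Fibonacci sketch (a standard textbook example, not code taken from the linked post):

    // Template metaprogramming: the compiler, not the program,
    // computes fib(N) while instantiating templates.
    template <unsigned N>
    struct Fib {
        static const unsigned long value = Fib<N - 1>::value + Fib<N - 2>::value;
    };

    // Specializations act as the recursion's base cases.
    template <> struct Fib<0> { static const unsigned long value = 0; };
    template <> struct Fib<1> { static const unsigned long value = 1; };

    int main() {
        // fib(20) = 6765, computed entirely at compile time.
        static_assert(Fib<20>::value == 6765, "computed by the compiler");
        return 0;
    }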
Generic programming is a very simple form of metacoding, albeit not usually at runtime. It's more like the preprocessor in C, and relates more to template programming in most use cases and basic implementations.
You'll often find in typed languages that you create a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting them to a string literal and then replacing the class name with a variable to interpolate.
Where generics are used at runtime it's a bit different; in that case it's simply variable programming, programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one reusable file; you only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, such as the type of a variable or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication, and although there are ways around it or to mitigate it, they aren't particularly convenient. Generics came along as a kind of variable variable, allowing the type of a variable to be made variable itself. Because the type is something normally expressed in the programming language that can now be specified at runtime, this is also considered metacoding, albeit a very simple case.
The effect of not having variability where you need it is to unroll your variables; that is, instead of having a variable you are forced to write an implementation for every possible would-be variable value.
As you can imagine, that is quite expensive. This is very common when using any kind of reusable object-storage library. Such a library will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object (which extends Object) and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. The assignment will not compile because of the implicit downcast, unless you explicitly cast it back to Shop. That raises another conundrum: without generics there is no way to check for compatibility and ensure the object you are storing is a Shop.
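The C++ analogue of that conundrum, as a sketch of mine (void* stands in for Java's Object):

    #include <string>
    #include <vector>

    struct Shop { std::string name; };

    int main() {
        // Without genericity: the container only knows "some pointer".
        std::vector<void*> store;
        Shop s{"corner shop"};
        store.push_back(&s);
        // Retrieval needs an unchecked cast; nothing stops a wrong one.
        Shop* back = static_cast<Shop*>(store[0]);

        // With genericity (templates): the element type is part of the
        // container's type, so no cast and no way to store the wrong thing.
        std::vector<Shop> typed;
        typed.push_back(s);
        Shop& safe = typed[0];

        (void)back; (void)safe;
        return 0;
    }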
Java avoids metaprogramming and tries to keep the language simple, using the OOP principle of polymorphism instead to make flexible code. However, there are some pressing and recurring problems that experience has surfaced, and these are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language, but it sparingly imports concepts from metaprogramming to solve its most nagging problems.
Programming languages that offer lavish metacoding facilities can be significantly more productive than languages that avoid it barring special cases (reflection, OOP polymorphism, etc.). However, it often also takes a lot more skill and expertise to produce understandable, maintainable, and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.

About first-, second-, and third-class values

A first-class value can be:
passed as an argument
returned from a subroutine
assigned into a variable.
A second-class value can only be passed as an argument.
A third-class value cannot even be passed as an argument.
Why should these things be defined like that? As I understand it, "can be passed as an argument" means the value can be pushed onto the runtime stack; "can be assigned into a variable" means it can be moved to a different location in memory; and "can be returned from a subroutine" means almost the same as "can be assigned into a variable", since the returned value is always put at a known address. So a first-class value is totally "movable" or "dynamic", a second-class value is half "movable", and a third-class value is just "static", like labels in C/C++, which can only be addressed by a goto statement; you can do nothing with that address except goto. Does my understanding make any sense? What do these three kinds of values mean exactly?
Oh no, I may have to go edit Wikipedia again.
There are really only two distinctions worth making: first-class and not first-class. If Michael Scott talks about a third-class anything, I'll be very depressed.
Ok, so what is "first-class," anyway? Well, it is a term that barely has a technical meaning. The meaning, when present, is usually comparative, and it applies to a thing in a language (I'm being deliberately vague here) that has more privileges than a comparable thing. That's all people mean by it.
Let's look at some examples:
Function pointers in C are first-class values because they can be passed to functions, returned from functions, and stored in heap-allocated data structures just like any other value (see the sketch after these examples). Functions in Pascal and Ada are not first-class values because although they can be passed as arguments, they cannot be returned as results or stored in heap-allocated data structures.
Struct types are second-class types in C, because there are no literal expressions of struct type. (Since C99 there are literal initializers with named fields, but this is still not as general as having a literal anywhere you can use an expression.)
Polymorphic values are second-class values in ML because although they can be let-bound to names, they cannot be lambda-bound. Therefore they cannot be passed as arguments. But in Haskell, because Haskell supports higher-rank polymorphism, polymorphic values are first-class. (They can even be stored in data structures!)
In Java, the type int is second class because you can't inherit from it. Type Integer is first class.
In C, labels are second class, because they don't have values and you can't compute with them. In FORTRAN, line numbers have values and so are first class. There is a GNU extension to C that allows you to define first-class labels, and it is jolly useful. What does first-class mean in this case? It means the labels have values, can be stored in data structures, and can be used in goto. But those values are second class in another sense, because a label from one procedure can't meaningfully be used in a goto that belongs to another procedure.
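And here is the sketch promised in the first example (my own code, written as C-style C++): function pointers being passed, returned, and stored like any other value.

    #include <cstdio>
    #include <vector>

    int twice(int x)  { return 2 * x; }
    int square(int x) { return x * x; }

    using Fn = int (*)(int);

    // 1. passed as an argument
    int apply(Fn f, int x) { return f(x); }

    // 2. returned from a function
    Fn pick(bool wantSquare) { return wantSquare ? square : twice; }

    int main() {
        // 3. stored in a (heap-allocated) data structure
        std::vector<Fn> table{twice, square};

        std::printf("%d %d %d\n",
                    apply(twice, 10),   // 20
                    pick(true)(5),      // 25
                    table[0](7));       // 14
        return 0;
    }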
Are we getting an idea how useless this terminology is?
I hope these examples convince you that the idea of "first-class" is not a very useful idea in thinking about programming languages overall. When you're talking about a particular feature of a particular language or language family, it can be a useful shorthand ("a language isn't functional unless it has first-class, nested functions") but by and large you're better off saying just what you mean instead of talking about "first-class" or "not first-class" things.
As for "third class", just say no.
Something is first-class if it is explicitly manipulable in the code. In other words, something is first-class if it can be programmatically manipulated at run-time.
This closely relates to meta-programming in the sense that what you describe in the code (at development time) is one meta-level, and what exists at run-time is another meta-level. But the barrier between these two meta-levels can be blurred, for instance with reflection. When something is reified at run-time, it becomes explicitly manipulable.
We speak of first-class objects because objects can be manipulated programmatically at run-time (that is their very purpose).
In Java, you have classes, but they are not first-class, because code can normally not manipulate a class unless it uses reflection. But in Smalltalk, classes are first-class: code can manipulate a class like a regular object.
In Java, you have packages (modules), but they are not first-class, because code does not manipulate packages at run-time. But in Newspeak, packages (modules) are first-class: you can instantiate a module and pass it to another module to specify the modularity at run-time.
In C#, you have closures, which are first-class functions. They exist and can be manipulated at run-time programmatically. Such things do not exist (yet) in Java.
To me, the boundary between first-class and not first-class is not exactly strict. It is sometimes hard to pronounce on for some language constructs, e.g. Java primitive types. We could say they are not first-class because they are not objects and are not manipulable through a reference that can be passed along, but a primitive value does still exist and can be manipulated at run-time.
PS: I agree with Norman Ramsey; 2nd-class and 3rd-class values make no sense to me.
First-class: A first-class construct is one which is an intrinsic element of a language. The following properties must hold.
It must form part of the lexical syntax of the language
It may have operators applied to it
It must be referenceable (for example stored in a variable)
Second-class: A second-class construct is one which is an intrinsic element of the language with the following properties.
It must form part of the lexical syntax of the language
It may have operators applied to it
Third-class: A third-class construct is one which forms part of the syntax of a language.
in Roger Keays and Andry Rakotonirainy, "Context-Oriented Programming", in Proceedings of the 3rd ACM International Workshop on Data Engineering for Wireless and Mobile Access (MobiDe '03), pages 9-16, New York, NY, USA, 2003. ACM.
Those terms are very broad and not really globally well defined, but here are the most logical definitions for them:
First-class values are the ones that have actual, tangible values, and so can be operated on and passed around, as variables, arguments, return values, or whatever.
This doesn't really need a thorough example, does it? In C, an int is first-class.
Second-class values are more limited. They have values, but they can't be used directly, so the compiler deliberately limits what you can do with them. You can reference them, so you can still have a first-class value representing them.
For example, in C, a function is a second-class value. It can't be altered, but it can be called and referenced.
Third-class values are even more limited. Not only do they not have values, but interaction is completely absent; often they exist only to be used as compile-time attributes.
For example, in Rust, a lifetime is a third-class value. You can't use the lifetime at all: you can only receive it as a template parameter, and you can only use it as a template parameter (only when creating a new variable), and that's all you can do with it.
Another example: in C++, a struct or a class is a third-class value. This doesn't need much explanation.
