I know that NULL isn't necessary in a programming language, and I recently made the decision not to include NULL in my programming language. Declaration is done by initialization, so it is impossible to have an uninitialized variable. My hope is that this will eliminate the NullPointerException in favor of more meaningful exceptions or simply not having certain kinds of bugs.
Of course, since the language is implemented in C, there will be NULLs used under the covers.
My question is: besides using NULL as an error flag (this is handled with exceptions) or as an endpoint for data structures such as linked lists and binary trees (this is handled with discriminated unions), are there any other use cases for NULL for which I should have a solution? Are there any really important implications of not having NULL which could cause me problems?
There's a recent article referenced on LtU by Tony Hoare titled Null References: The Billion Dollar Mistake, which describes a method to allow the presence of NULLs in a programming language while eliminating the risk of dereferencing such a NULL reference. It seems so simple, yet it's such a powerful idea.
Update: here's a link to the actual paper that I read, which talks about the implementation in Eiffel: http://docs.eiffel.com/book/papers/void-safety-how-eiffel-removes-null-pointer-dereferencing
Borrowing a page from Haskell's Maybe monad, how will you handle the case of a return value that may or may not exist? For instance, if you tried to allocate memory but none was available. Or maybe you've created an array to hold 50 foos, but none of the foos have been instantiated yet -- you need some way to be able to check for these kinds of things.
I guess you can use exceptions to cover all these cases, but does that mean that a programmer will have to wrap all of those in a try-catch block? That would be annoying at best. Or everything would have to return its own value plus a boolean indicating whether the value was valid, which is certainly not better.
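For what it's worth, option types give you the value-plus-flag idea without the clumsiness, because the flag lives in the type. A minimal sketch in Rust's Option (the same idea as Haskell's Maybe); the function and data are made up for illustration:

// Option<i32> says right in the signature that there may be no result.
fn find_even(values: &[i32]) -> Option<i32> {
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    match find_even(&[1, 3, 4]) {
        Some(v) => println!("found {v}"),  // the value exists
        None => println!("no even value"), // the compiler makes us handle this case
    }
}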
FWIW, I'm not aware of any language that doesn't have some sort of notion of NULL: you've got null in all the C-style languages and Java; Python has None; Scheme, Lisp, Smalltalk, Lua, and Ruby all have nil; VB uses Nothing; and Haskell has its own kind of nothing.
That doesn't mean a language absolutely has to have some kind of null, but if all of the other big languages out there use it, surely there was some sound reasoning behind it.
On the other hand, if you're only making a lightweight DSL or some other non-general language, you could probably get by without null if none of your native data types require it.
The one that immediately comes to mind is pass-by-reference parameters. I'm primarily an Objective-C coder, so I'm used to seeing things kind of like this:
NSError *error;
[anObject doSomething:anArgumentObject error:&error];
// Error-handling code follows...
After this code executes, the error object has details about the error that was encountered, if any. But say I don't care if an error happens:
[anObject doSomething:anArgumentObject error:nil];
Since I don't pass in any actual value for the error object, I get no results back, and I don't really worry about parsing an error (since I don't care in the first place if it occurs).
You've already mentioned you're handling errors a different way, so this specific example doesn't really apply, but the point stands: what do you do when you pass something back by reference? Or does your language just not do that?
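One alternative, for a language without out-parameters: return the error by value. A minimal sketch in Rust; do_something and the error type are hypothetical stand-ins for the Objective-C example above:

fn do_something(arg: i32) -> Result<i32, String> {
    if arg < 0 {
        Err("negative argument".to_string())
    } else {
        Ok(arg * 2)
    }
}

fn main() {
    // A caller who cares about the error:
    match do_something(-1) {
        Ok(v) => println!("got {v}"),
        Err(e) => eprintln!("error: {e}"),
    }
    // A caller who doesn't (the error:nil case) just discards the result:
    let _ = do_something(-1);
}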
I think it's useful for a method to return NULL. For example, a search method that is supposed to return some object can return the found object, or NULL if it wasn't found.
I'm starting to learn Ruby, and Ruby has a very interesting concept for NULL; maybe you could consider implementing something similar. In Ruby, NULL is called nil, and it's an actual object just like any other object. It happens to be implemented as a global singleton. Also in Ruby there is an object false, and both nil and false evaluate to false in boolean expressions, while everything else evaluates to true (even 0, for example, evaluates to true).
In my mind there are two use cases for which NULL is generally used:
The variable in question doesn't have a value (Nothing)
We don't know the value of the variable in question (Unknown)
Both are common occurrences and, honestly, using NULL for both can cause confusion.
Worth noting is that some languages that don't support NULL do support the notion of Nothing/Unknown. Haskell, for instance, supports "Maybe a", which can contain either a value of type a or Nothing. Thus, functions can return (and accept) a type that they know will always have a value, or they can return/accept "Maybe a" to indicate that there may not be a value.
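As a concrete sketch, this kind of type is just a discriminated union and can even be hand-rolled. Here it is in Rust, whose built-in Option<T> has exactly this shape (the Maybe/describe names are mine, for illustration):

enum Maybe<T> {
    Nothing,
    Just(T),
}

fn describe(m: Maybe<i32>) -> String {
    match m {
        Maybe::Nothing => "no value".to_string(),
        Maybe::Just(v) => format!("value: {v}"),
    }
}

fn main() {
    println!("{}", describe(Maybe::Just(5)));
    println!("{}", describe(Maybe::Nothing));
}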
I prefer the concept of having non-nullable pointers be the default, with nullable pointers a possibility. You can almost do this in C++ through references (&) rather than pointers, but it can get quite gnarly and irksome in some cases.
A language can do without null in the Java/C sense; for instance, Haskell (and most other functional languages) has a "Maybe" type, which is effectively a construct providing exactly what an optional (nullable) pointer provides.
It's not clear to me why you would want to eliminate the concept of 'null' from a language. What would you do if your app requires you to do some initialization 'lazily' - that is, you don't perform the operation until the data is needed? Ex:
public class ImLazy {
  public ImLazy() {
    //I can't initialize resources in my constructor, because I'm lazy.
    //Maybe I don't have a network connection available yet, or maybe I'm
    //just not motivated enough.
  }

  private ResourceObject lazyObject;

  public ResourceObject getLazyObject() { //initialize then return
    if (lazyObject == null) {
      lazyObject = new DatabaseNetworkResourceThatTakesForeverToLoad();
    }
    return lazyObject;
  }

  public boolean isObjectLoaded() { //just check, don't initialize
    return (lazyObject != null);
  }
}
In a case like this, how could we return a value for getLazyObject()? We could come up with one of two things:
-require the user to initialize lazyObject in the declaration. The user would then have to fill in some dummy object (UselessResourceObject), which requires them to write all of the same error-checking code (if (lazyObject.equals(UselessResourceObject)) ...), or:
-come up with some other value, which works the same as null, but has a different name
For any complex/OO language you need this functionality, or something like it, as far as I can see. It may be valuable to have a non-null reference type (for example, in a method signature, so that you don't have to do a null check in the method code), but the null functionality should be available for cases where you do use it.
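For what it's worth, the second alternative (a value that works like null but has a different name) is essentially what option types provide, with the difference that the "might be missing" state is visible in the type. A minimal sketch of the lazy-initialization example without null, in Rust; ResourceObject and load_slow_resource are hypothetical stand-ins:

struct ResourceObject;

fn load_slow_resource() -> ResourceObject {
    ResourceObject // stands in for the slow database/network load
}

struct ImLazy {
    lazy_object: Option<ResourceObject>, // None plays the "not loaded yet" role
}

impl ImLazy {
    fn new() -> Self {
        ImLazy { lazy_object: None }
    }

    fn get_lazy_object(&mut self) -> &ResourceObject {
        // Initialize on first use, then return a reference.
        self.lazy_object.get_or_insert_with(load_slow_resource)
    }

    fn is_object_loaded(&self) -> bool {
        self.lazy_object.is_some()
    }
}

fn main() {
    let mut lazy = ImLazy::new();
    assert!(!lazy.is_object_loaded());
    lazy.get_lazy_object(); // first call triggers the load
    assert!(lazy.is_object_loaded());
}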
Interesting discussion happening here.
If I was building a language, I really don't know if I would have the concept of null. I guess it depends on how I want the language to look. Case in point: I wrote a simple templating language whose main strength is nested tokens and ease of making a token a list of values. It doesn't have the concept of null, but then it doesn't really have the concept of any types other than string.
By comparison, the language it is built in, Icon, uses null extensively. Probably the best thing the language designers of Icon did with null is make it synonymous with an uninitialized variable (i.e. you can't tell the difference between a variable that doesn't exist and one that currently holds the value null). And then they created two prefix operators to check null and not-null.
In PHP, I sometimes use null as a 'third' boolean value. This is good in "black-box" type classes (e.g. ORM core) where a state can be True, False or I Don't Know. Null is used for the third value.
Of course, both of these languages do not have pointers in the same way C does, so null pointers do not exist.
We use nulls all the time in our application to represent the "nothing" case. For example, if you are asked to look up some data in the database given an id, and no record matches that id: return null. This is very handy because we can store nulls in our cache, which means we don't have to go back to the database if someone asks for that id again in a few seconds.
The cache itself has two different kinds of responses: null, meaning there was no such entry in the cache, or an entry object. The entry object might have a null value, which is the case when we cached a null db lookup.
Our app is written in Java, but even with unchecked exceptions, doing this with exceptions would be incredibly annoying.
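A minimal sketch of that two-level scheme, written in Rust rather than the Java of the original app (all names here are hypothetical): the outer Option distinguishes "not cached" from "cached", and the inner one records a cached negative lookup.

use std::collections::HashMap;

type Record = String; // stands in for a real database record

struct Cache {
    entries: HashMap<u64, Option<Record>>, // a stored None means "the DB has nothing"
}

impl Cache {
    fn lookup(&self, id: u64) -> Option<&Option<Record>> {
        // None          -> not in the cache: go ask the database
        // Some(None)    -> cached: the database definitely has no such record
        // Some(Some(r)) -> cached: here is the record
        self.entries.get(&id)
    }
}

fn main() {
    let mut entries = HashMap::new();
    entries.insert(1, None); // we looked id 1 up before and the DB had nothing
    let cache = Cache { entries };
    assert_eq!(cache.lookup(1), Some(&None)); // cached miss: skip the DB
    assert_eq!(cache.lookup(2), None);        // never cached: hit the DB
}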
If one accepts the following propositions: that powerful languages should have some sort of pointer or reference type (i.e. something which can hold a reference to data which does not exist at compile time); that they should have some form of array type (or other means of having a collection of storage slots which are addressable sequentially via integer index); that slots of the latter should be able to hold the former; and that one may have to read some slots of an array of pointers/references before sensible values exist for all of them; then there will be programs which, from a compiler's perspective, will read an array slot before a sensible value has been written to it. (Trying to ascertain in the general case whether an array slot could be read before it is written would be equivalent to the Halting Problem.)
While it would be possible for a language to require that all array slots be initialized with some non-null reference before any of them could be read, in many situations there isn't really anything that could be stored which would be better than null. If an attempt is made to read an as-yet-unwritten array slot and dereference the (non)item contained there, that represents an error. It would be better to have the system trap that condition than to access some arbitrary object whose sole purpose for existence is to give the array slots some non-null thing they can reference.
I want to know the main differences between Rust's Option::None and "null" in other programming languages. And why is None considered to be better?
A brief history lesson in how to not have a value!
In the beginning, there was C. And in C, we had these things called pointers. Oftentimes, it was useful for a pointer to be initialized later, or to be optional, so the C community decided that there would be a special place in memory, usually memory address zero, which we would all just agree meant "there's no value here". Functions which returned T* could return NULL (written in all-caps in C, since it's a macro) to indicate failure, or lack of value. In a low-level language like C, in that day and age, this worked fairly well. We were only just rising out of the primordial ooze that is assembly language and into the realm of typed (and comparatively safe) languages.
C++ and later Java basically aped this approach. In C++, every pointer could be NULL (later, nullptr was added, which is not a macro and is a slightly more type-safe NULL). In Java, the problem becomes more clear: Every non-primitive value (which means, basically, every value that isn't a number or character) could be null. If my function takes a String as argument, then I always have to be ready to handle the case where some naive young programmer passes me a null. If I get a String as a result from a function, then I have to check the docs to see if it might not actually exist.
That gave us NullPointerException, which even today crowds this very website with dozens of new questions every day from young programmers falling into its traps.
It is clear, in the modern day, that "every value might be null" is not a sustainable approach in a statically typed language. (Dynamically typed languages tend to be more prepared to deal with failure at every point anyway, by their very nature, so the existence of, say, nil in Ruby or None in Python is somewhat less egregious.)
Kotlin, which is often lauded as a "better Java", approaches this problem by introducing nullable types. Now, not every type is nullable. If I have a String, then I actually have a String. If I intend that my String be nullable, then I can opt-in to nullability with String?, which is a type that's either a string or null. Crucially, this is type-safe. If I have a value of type String?, then I can't call String methods on it until I do a null check or make an assertion. So if x has type String?, then I can't do x.toLowerCase() unless I first do one of the following.
Put it inside an if (x != null), to make sure that x is not null (or use some other form of control flow that proves my case).
Use the ?. null-safe call operator to do x?.toLowerCase(). This will compile to an if (x != null) check and will return a String?, which is null if the original string was null.
Use the !! operator to assert that the value is not null. The assertion is checked and will throw an exception if I turn out to be wrong.
Note that (3) is what Java does by default at every turn. The difference is that, in Kotlin, the "I'm asserting that I know better than the type checker" case is opt-in, and you have to go out of your way to get into a situation where you can get a null pointer exception. (I'm glossing over platform types, which are a convenient hack in the type system for interoperating with Java; they're not really germane here.)
Nullable types are how Kotlin solves the problem, and they're how TypeScript (with --strict mode) and Scala 3 (with null checks turned on) handle it as well. However, it's not really viable in Rust without significant changes to the compiler. That's because nullable types require the language to support subtyping. In a language like Java or Kotlin, which is built on object-oriented principles first, it's easy to introduce new subtypes. String is a subtype of String?, and also of Any (essentially java.lang.Object), and also of Any? (any value and also null). So a string like "A" has a principal type of String, but it also has all of those other types, by subtyping.
Rust doesn't really have this concept. Rust has a bit of type coercion available with trait objects, but that's not really subtyping. It's not correct to say that String is a subtype of dyn Display, only that a String (in an unsized context) can be coerced automatically into a dyn Display.
So we need to go back to the drawing board. Nullable types don't really work here. However, fortunately, there's another way to handle the problem of "this value may or may not exist", and it's called optional types. I wouldn't hazard a guess as to what language first tried this idea, but it was definitely popularized by Haskell and is common in more functional languages.
In functional languages, we often have a notion of principal types, similar to the one in OOP languages. That is, a value x has a "best" type T. In OOP languages with subtyping, x might have other types which are supertypes of T. However, in functional languages without subtyping, x truly only has one type. There are other types that can unify with T, such as (written in Haskell's notation) forall a. a. But it's not really correct to say that the type of x is forall a. a.
The whole nullable type trick in Kotlin relied on the fact that "abc" was both a String and a String?, while null was only a String?. Since we don't have subtyping, we'll need two separate values for the String and String? case.
If we have a structure like this in Rust
struct Foo(i32);
Then Foo(0) is a value of type Foo. Period. End of story. It's not a Foo?, or an optional Foo, or anything like that. It has one type.
However, there's this related value called Some(Foo(0)) which is an Option<Foo>. Note that Foo(0) and Some(Foo(0)) are not the same value; they just happen to be related in a fairly natural way. The difference is that while a Foo must exist, an Option<Foo> could be Some(Foo(0)) or it could be None, which is kind of like our null in Kotlin. We still have to check whether or not the value is None before we can do anything. In Rust, we generally do that with pattern matching, or by using one of the several built-in functions that do the pattern matching for us. It's the same idea, just implemented using techniques from functional programming.
So if we want to get the value out of an Option, or a default if it doesn't exist, we can write
my_option.unwrap_or(0)
If we want to do something to the inside if it exists, or null out if it doesn't, then we write
my_option.and_then(|inner_value| ...)
This is basically what ?. does in Kotlin. If we want to assert that a value is present and panic otherwise, we can write
my_option.unwrap()
Finally, our general purpose Swiss army knife for dealing with this is pattern matching.
match my_option {
None => {
...
}
Some(value) => {
...
}
}
So we have two different approaches to dealing with this problem: nullable types based on subtyping, and optional types based on composition. "Which is better" is getting into opinion a bit, but I'll try to summarize the arguments I see on both sides.
Advocates of nullable types tend to focus on ergonomics. It's very convenient to be able to do a quick "null check" and then use the same value, rather than having to jump through hoops of unwrapping and wrapping values constantly. It's also nice being able to pass literal values to functions expecting String? or Int? without worrying about bundling them or constantly checking whether they're in a Some or not.
On the other hand, optional types have the benefit of being less "magic". If nullable types didn't exist in Kotlin, we'd be somewhat out of luck. But if Option didn't exist in Rust, someone could write it themselves in a few lines of code. It's not special syntax, it's just a type that exists and has some (ordinary) functions defined on it. It's also built up of composition, which means (naturally) that it composes better. That is, (T?)? is equivalent to T? (the former still only has one null value), while Option<Option<T>> is distinct from Option<T>. I don't recommend writing functions that return Option<Option<T>> directly, but you can end up getting bitten by this when writing generic functions (i.e. your function returns S? and the caller happens to instantiate S with Int?).
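A tiny illustration of that composition point (a made-up example, purely for demonstration):

fn main() {
    // Think of looking up a key whose value is itself optional (e.g. a nullable field).
    let found_but_absent: Option<Option<i32>> = Some(None); // key found, value absent
    let never_found: Option<Option<i32>> = None;            // key not found at all
    // Two distinct "nothings"; Kotlin's (T?)? would collapse them into a single null.
    assert_ne!(found_but_absent, never_found);
}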
I go a bit more into the differences in this post, where someone asked basically the same question but in Elm rather than Rust.
There are two implicit assumptions in the question: first, that other languages' "nulls" (and nils and undefs and nones) are all the same. They aren't.
The second assumption is that "null" and "none" provide similar functionality. There are many different uses of null: the value is unknown (SQL's three-valued logic), the value is a sentinel (C's null pointer and null byte), there was an error (Perl's undef), and to indicate no value (Rust's Option::None).
Null itself comes in at least three different flavors: an invalid value, a keyword, and special objects or types.
Keyword as null
Many languages opt for a specific keyword to indicate null. Perl has undef, Go and Ruby have nil, Python has None. These are useful in that they are distinct from false or 0.
Invalid value as null
Unlike a keyword, these are ordinary values within the language that mean something specific, but are used as null anyway. The best examples are C's null pointer and null byte.
Special objects and types as null
Increasingly, languages will use special objects and types for null. These have all the advantages of a keyword, but rather than a generic "something went wrong" they can have a very specific meaning per domain. They can also be user-defined, offering flexibility beyond what the language designer intended. And, if you use objects, they can have additional information attached.
For example, std::ptr::null in Rust indicates a null raw pointer. Go has the error interface. Rust has Result::Err and Option::None.
Null as unknown
Most programming languages use two-valued or binary logic: true or false. But some, particularly SQL, use three-valued or ternary logic: true, false, and unknown. Here null means "we do not know what the value is". It's not true, it's not false, it's not an error, it's not empty, it's not zero: the value is unknown. This changes how the logic works.
If you compare unknown to anything, even itself, the result is unknown. This allows you to draw logical conclusions based on incomplete data. For example, Alice is 7'4" tall, we don't know how tall Bob is. Is Alice taller than Bob? Unknown.
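Here's a sketch of that three-valued behaviour, modelling "unknown" as None over Option<bool> in Rust (my own encoding for illustration, not how SQL engines implement it):

// Three-valued AND: false dominates; otherwise unknown propagates.
fn and3(a: Option<bool>, b: Option<bool>) -> Option<bool> {
    match (a, b) {
        (Some(false), _) | (_, Some(false)) => Some(false),
        (Some(true), Some(true)) => Some(true),
        _ => None, // anything else involves an unknown
    }
}

// "Is Alice taller than Bob?" with possibly-unknown heights in inches.
fn taller(alice: Option<u32>, bob: Option<u32>) -> Option<bool> {
    match (alice, bob) {
        (Some(a), Some(b)) => Some(a > b),
        _ => None, // comparing against unknown yields unknown
    }
}

fn main() {
    assert_eq!(and3(Some(false), None), Some(false)); // false AND unknown = false
    assert_eq!(taller(Some(88), None), None);         // Alice is 88 inches; Bob unknown
}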
Null as uninitialized
When you declare a variable, it has to contain some value. Even in C, which famously does not initialize variables for you, a variable contains whatever value happened to be sitting in memory. Modern languages will initialize values for you, but they have to initialize them to something. Often that something is null.
Null as sentinel
When you have a list of things and you need to know when the list is done, you can use a sentinel value. This is anything which is not a valid value for the list. For example, if you have a list of positive numbers you can use -1.
The classic example is C. In C, you don't know how long an array is. You need something to tell you to stop. You can pass around the size of the array, or you can use a sentinel value. You read the array until you see the sentinel. A string in C is just an array of characters which ends with a "null byte", that's just a 0 which is an invalid character. If you have an array of pointers, you can end it with a null pointer. The disadvantages are 1) there's not always a truly invalid value and 2) if you forget the sentinel you walk off the end of the array and bad stuff happens.
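Here is that classic pattern sketched as code, scanning a 0-terminated buffer C-string style (written in Rust, where walking off the end traps with a panic instead of silently reading garbage):

// Count bytes up to, but not including, the 0 sentinel.
fn strlen_sentinel(buf: &[u8]) -> usize {
    let mut n = 0;
    while buf[n] != 0 {
        n += 1; // if the sentinel is missing, indexing past the end panics here
    }
    n
}

fn main() {
    assert_eq!(strlen_sentinel(b"hi\0"), 2);
}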
A more modern example is how to know when to stop iterating. For example, an iterator can hand back nil when there are no more items, causing the loop to exit. While this is more flexible than a fixed sentinel, it has the same problem: you need a value which will never appear in the list. What if nil is a valid value? (Go's channels sidestep this with a second return value: v, ok := <-ch reports ok == false once the channel is closed and drained, so no in-band value is sacrificed.)
Languages such as Python and Ruby choose to solve this problem by raising a special exception when the iteration is done. Both will raise StopIteration, which their loops will catch in order to exit. This avoids the problem of choosing a sentinel value: Python and Ruby iterators can return anything.
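Rust takes a third route: Iterator::next returns Option<Item>, so the end-of-iteration signal is out of band and any value, even a None-like one, can be yielded:

fn main() {
    let v = vec![Some(1), None, Some(3)]; // a list in which "null" is a valid item
    let mut it = v.iter();
    assert_eq!(it.next(), Some(&Some(1)));
    assert_eq!(it.next(), Some(&None)); // an item that happens to be "null"
    assert_eq!(it.next(), Some(&Some(3)));
    assert_eq!(it.next(), None); // genuinely exhausted
}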
Null as error
While many languages use exceptions for error handling, some languages do not. Instead, they return a special value to indicate an error, often null. C, Go, Perl, and Rust are good examples.
This has the same problem as a sentinel: you need to use a value which is not a valid return value. Sometimes this is not possible. For example, functions in C can only return a single value of a given type. If a function returns a pointer, it can return a null pointer to indicate an error. But if it returns a number, you have to pick an otherwise valid number as the error value. This conflation of the error and return values is a problem.
Go works around this by allowing functions to return multiple values, typically the return value and an error. Then the two values can be checked independently. Rust can only return a single value, so it works around this by returning the special type Result. This contains either an Ok with the return value or an Err with an error code.
In both Rust and Go, these are not just values: they can have methods called on them, expanding their functionality. Rust's Result::Err has the additional advantage of being a distinct type, so you can't accidentally use a Result::Err as anything else.
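A minimal sketch of the Result style in Rust, leaning on the standard library's parse so no magic error number is needed:

fn parse_age(s: &str) -> Result<u32, std::num::ParseIntError> {
    s.trim().parse() // Ok(n) on success, Err(e) on failure; every u32 remains a valid result
}

fn main() {
    match parse_age("42") {
        Ok(age) => println!("age = {age}"),
        Err(e) => eprintln!("bad input: {e}"),
    }
}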
Null as no value
Finally, we have "none of the given options". Quite simply, there's a set of valid options and the result is none of them. This is distinct from "unknown" in that we know the value is not in the valid set of values. For example, if I asked "which fruit is this car" the result would be null meaning "the car is not any fruit".
When asking for the value of a key in a collection, and that key does not exist, you will get "no value". For example, Rust's HashMap get will return None if the key does not exist.
This is not an error, though the two often get confused. By contrast, Ruby will raise ArgumentError if you pass nonsense into a function: array.first(-2) asks for the first -2 values, which is nonsense and will raise an ArgumentError.
Option vs Result
Which finally brings us back to Option::None. It is the "special type" version of null which has many advantages.
Rust uses Option for many things: to indicate an uninitialized value, to indicate simple errors, as no value, and for Rust specific things. The documentation gives numerous examples.
Initial values
Return values for functions that are not defined over their entire input range (partial functions)
Return value for otherwise reporting simple errors, where None is returned on error
Optional struct fields
Struct fields that can be loaned or “taken”
Optional function arguments
Nullable pointers
Swapping things out of difficult situations
To use it in so many places dilutes its value as a special type. It also overlaps with Result, which is what you're supposed to use to return results from functions.
The best advice I've seen is to use Result<Option<T>, E> to separate the three possibilities: a valid result, no result, and an error. Jeremy Fitzhardinge gives a good example of what to return from querying a key/value store.
...I'm a big advocate of returning Result<Option<T>, E> where Ok(Some(value)) means "here's the thing you asked for", Ok(None) means "the thing you asked for definitely doesn't exist", and Err(...) means "I can't say whether the thing you asked for exists or not because something went wrong".
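A minimal sketch of that advice; the store and its failure mode here are hypothetical:

use std::collections::HashMap;

struct Store {
    data: HashMap<String, String>,
    connected: bool, // stands in for a real failure mode such as a lost connection
}

fn fetch(store: &Store, key: &str) -> Result<Option<String>, String> {
    if !store.connected {
        return Err("connection lost".to_string()); // can't say whether the key exists
    }
    Ok(store.data.get(key).cloned()) // Some(value) = found it; None = definitely absent
}

fn main() {
    let store = Store {
        data: HashMap::from([("k".to_string(), "v".to_string())]),
        connected: true,
    };
    assert_eq!(fetch(&store, "k"), Ok(Some("v".to_string())));
    assert_eq!(fetch(&store, "missing"), Ok(None));
}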
In Java, with minor exceptions, null is a value of every type. Is there a corresponding value like that in Haskell?
Short answer: Yes
As in any sensible Turing-complete language, infinite loops can be given any type:
loop :: a
loop = loop
This (or its standard-library cousins undefined and error "...") is occasionally useful as a temporary placeholder for as-yet-unimplemented functionality, or as a signal to readers that we are in a dead branch for reasons that are too tedious to explain to the compiler. But it is generally not used at all analogously to the way null is typically used in Java code.
Normally to signal lack of a value when that's a sensible thing to do, one instead uses
Nothing :: Maybe a
which, while it can't be any type at all, can be the lack of any type at all.
Technically yes, as Daniel Wagner's answer states.
However I would argue that "a value that can be used for every type" and "a value like Java's null" are actually very different requirements. Haskell does not have the latter. I think this is a good thing (as does Tony Hoare, who famously called his invention of null-references a billion-dollar mistake).
Java-like null has no properties except that you can check whether a given reference is equal to it. Anything else you ask of it will blow up at runtime.
Haskell undefined (or error "my bad", or let x = x in x, or fromJust Nothing, or any of the infinite ways of getting at it) has no properties at all. Anything you ask of it will blow up at runtime, including whether any given value is equal to it.
This is a crucial distinction because it makes it near-useless as a "missing" value. It's not possible to do the equivalent of if (thing == null) { do_stuff_without_thing(); } else { do_stuff_with(thing); } using undefined in place of null in Haskell. The only code that can safely handle a possibly-undefined value is code that just never inspects that value at all, and so you can only safely pass undefined to other code when you know that it won't be used in any way [1].
Since we can't do "null pointer checks", in Haskell code we almost always use some type T (for arguments, variables, and return types) when we mean there will be a value of type T, and we use Maybe T [2] when we mean that there may or may not be a value of type T.
So Haskellers use Nothing roughly where Java programmers would use null, but Nothing is in practice very different from Haskell's version of a value that is of every type. Nothing can't be used at every type, only "Maybe types"; but there is a "Maybe version" of every type. The type distinction between T and Maybe T means that it's clear from the type whether you can omit a value, when you need to handle the possible absence of a value [3], etc. In Java you're relying on the documentation being correct (and present) to get that knowledge.
[1] Laziness does mean that the "won't be inspected at all" situation can come up a lot more than it would in a strict language like Java, so sub-expressions that may-or-may-not be the bottom value are not that uncommon. But even their use is very different from Java's idioms around values that might be null.
[2] Maybe is a data type with the definition data Maybe a = Nothing | Just a, where the Nothing constructor contains no other information and the Just constructor just stores a single value of type a. So for a given type T, Maybe T adds an additional "might not be present" feature and nothing else to the base type T.
[3] And the Haskell version of handling possible absence is usually using combinators like maybe or fromMaybe, or pattern matching, all of which have the advantage over if (thing == null) that the compiler is aware of which part of the code is handling a missing value and which is handling the value.
Short answer: No
It wouldn't be very type-safe to have one. Maybe you can provide more information in your question about what you are trying to accomplish.
Edit: Daniel Wagner is right. An infinite loop can be of every type.
Short answer: Yes. But also no.
While it's true that an infinite loop, aka undefined (which are identical in the denotational semantics), inhabits every type, it is usually sufficient to reason about programs as if these values didn't exist, as exhibited in the popular paper Fast and Loose Reasoning is Morally Correct.
Bottom inhabits every type in Haskell. It can be written explicitly as undefined in GHC.
I disagree with almost every other answer to this question.
loop :: a
loop = loop
does not define a value of any type. It does not even define a value.
loop :: a
is a promise to return a value of type a.
loop = loop
is an endless loop, so the promise is broken. Since loop never returns at all, it follows that it never returns a value of type a. So no, even technically, there is no null value in Haskell.
The closest thing to null is to use Maybe. With Maybe you have Nothing, and this is used in many contexts. It is also much more explicit.
A similar argument can be used for undefined. When you use undefined in a non-strict setting, you just have a thunk that will throw an error as soon as it is evaluated. But it will never give you a value of the promised type.
Haskell has a bottom value because it is unavoidable: due to the halting problem, you can never prove that a function will actually return at all, so it is always possible to break promises. Just because someone promises to give you $100, it does not mean that you will actually get it. He can always say "I didn't specify when you will get the money" or just refuse to keep the promise. The promise doesn't even prove that he has the money or that he would be able to provide it when asked about it.
An example from the Apple world:
Objective-C had a null value, and it was called nil. The newer Swift language switched to an Optional type, where Optional<T> can be abbreviated to T?. It behaves exactly like Haskell's Maybe monad. Why did they do this? Maybe because of Tony Hoare's apology. Maybe because Haskell was one of Swift's role models.
Looking at some pieces of code around the internet, I've noticed some authors tend to write string comparisons like
if("String"==$variable)
in PHP, or
if("String".equals(variable))
Whereas my preference is:
if(variable.equals("String"))
I realize these are effectively equal: they compare two strings for equality. But I was curious if there was an advantage to one over the other in terms of performance or something else.
Thank you for the help!
One argument for the approach of using an equality function, or of writing if( constant == variable ) rather than if( variable == constant ), is that it prevents you from accidentally mistyping an assignment instead of a comparison, for instance:
if( s = "test" )
Will assign "test" to s, which will result in undesired behaviour which may potentially cause a hard-to-find bug. However:
if( "test" = s )
Will in most languages (that I'm aware of) result in some form of warning or compiler error, helping to avoid a bug later on.
With a simple int example, this prevents accidental writes of
if (a=5)
which would be a compile error if written as
if (5=a)
I sure don't know about all languages, but decent C compilers warn you about if (a=b). Perhaps whatever language your question is written in doesn't have such a feature, so to be able to generate an error in such a case, they have reversed the order of the comparison arguments.
"Yoda conditions", some call these.
The kind of syntax a language uses has nothing to do with efficiency; it is all about how the comparison algorithm works.
In the examples you mentioned, this:
if("String".equals(variable))
and this:
if(variable.equals("String"))
would be exactly the same, because the expression "String" is treated as a String object.
Languages that provide a comparison method for Strings will use the fastest method available, so you shouldn't worry about it unless you want to implement the method yourself ;)
I have an upcoming project in which a core requirement will be to mutate the way a method works at runtime. Note that I'm not talking about a higher level OO concept like "shadow one method with another", although the practical effect would be similar.
The key properties I'm after are:
I must be able to modify the method in such a way that I can add new expressions, remove existing expressions, or modify any of the expressions that take place in it.
After modifying the method, subsequent calls to that method would invoke the new sequence of operations. (Or, if the language binds methods rather than evaluating every single time, provide me a way to unbind/rebind the new method.)
Ideally, I would like to manipulate the atomic units of the language (e.g., "invoke method foo on object bar") and not the assembly directly (e.g. "push these three parameters onto the stack"). In other words, I'd like to be able to have high confidence that the operations I construct are semantically meaningful in the language. But I'll take what I can get.
If you're not sure if a candidate language meets these criteria, here's a simple litmus test:
Can you write another method called clean which:
accepts a method m as input
returns another method m2 that performs the same operations as m
such that m2 is identical to m, but doesn't contain any calls to the print-to-standard-out method in your language (puts, System.Console.WriteLn, println, etc.)?
I'd like to do some preliminary research now and figure out what the strongest candidates are. Having a large, active community is as important to me as the practicality of implementing what I want to do. I am aware that there may be some uncharted territory here, since manipulating bytecode directly is not typically an operation that needs to be exposed.
What are the choices available to me? If possible, can you provide a toy example in one or more of the languages that you recommend, or point me to a recent example?
Update: The reason I'm after this is that I'd like to write a program which is capable of modifying itself at runtime in response to new information. This modification goes beyond mere parameters or configurable data, but full-fledged, evolved changes in behavior. (No, I'm not writing a virus. ;) )
Well, you could always use .NET and the Expression libraries to build up expressions. That, I think, is really your best bet, as you can build up representations of commands in memory, and there is good library support for manipulating, traversing, etc.
Well, those languages with really strong macro support (in particular Lisps) could qualify.
But are you sure you actually need to go this deeply? I don't know what you're trying to do, but I suppose you could emulate it without actually getting too deeply into metaprogramming. Say, instead of using a method and manipulating it, use a collection of functions (with some way of sharing state, e.g. an object holding state passed to each).
I would say Groovy can do this.
For example
class Foo {
void bar() {
println "foobar"
}
}
Foo.metaClass.bar = {->
println "barfoo"
}
Or on a specific instance of Foo, without affecting other instances:
fooInstance.metaClass.bar = {->
println "instance barfoo"
}
Using this approach I can add, remove, or modify expressions in the method, and subsequent calls will use the new method. You can do quite a lot with the Groovy metaClass.
In Java, many professional frameworks do this using the open-source ASM framework.
Here is a list of well-known Java apps and libraries that include ASM.
A few years ago, BCEL was also very widely used.
There are languages/environments that allow real runtime modification, for example Common Lisp, Smalltalk, and Forth. Use one of them if you really know what you're doing. Otherwise you can simply employ an interpreter pattern for the evolving part of your code; it is possible (and trivial) in any OO or functional language.
A first-class value can be
passed as an argument
returned from a subroutine
assigned into a variable.
A second-class value can only be passed as an argument.
A third-class value cannot even be passed as an argument.
Why should these things be defined like that? As I understand it, "can be passed as an argument" means it can be pushed onto the runtime stack; "can be assigned into a variable" means it can be moved into a different location in memory; and "can be returned from a subroutine" means almost the same as "can be assigned into a variable", since the returned value is always put into a known address. So a first-class value is totally "movable" or "dynamic", a second-class value is half "movable", and a third-class value is just "static", like labels in C/C++, which can only be addressed by a goto statement; you can do nothing with that address except goto. Does my understanding make any sense? What do these three kinds of values mean exactly?
Oh no, I may have to go edit Wikipedia again.
There are really only two distinctions worth making: first-class and not first-class. If Michael Scott talks about a third-class anything, I'll be very depressed.
Ok, so what is "first-class," anyway? Well, it is a term that barely has a technical meaning. The meaning, when present, is usually comparative, and it applies to a thing in a language (I'm being deliberately vague here) that has more privileges than a comparable thing. That's all people mean by it.
Let's look at some examples:
Function pointers in C are first-class values because they can be passed to functions, returned from functions, and stored in heap-allocated data structures just like any other value. Functions in Pascal and Ada are not first-class values because although they can be passed as arguments, they cannot be returned as results or stored in heap-allocated data structures.
Struct types are second-class types in C, because there are no literal expressions of struct type. (Since C99 there are literal initializers with named fields, but this is still not as general as having a literal anywhere you can use an expression.)
Polymorphic values are second-class values in ML because although they can be let-bound to names, they cannot be lambda-bound. Therefore they cannot be passed as arguments. But in Haskell, because Haskell supports higher-rank polymorphism, polymorphic values are first-class. (They can even be stored in data structures!)
In Java, the type int is second class because you can't inherit from it. Type Integer is first class.
In C, labels are second class, because they don't have values and you can't compute with them. In FORTRAN, line numbers have values and so are first class. There is a GNU extension to C that allows you to define first-class labels, and it is jolly useful. What does first-class mean in this case? It means the labels have values, can be stored in data structures, and can be used in goto. But those values are second class in another sense, because a label from one procedure can't meaningfully be used in a goto that belongs to another procedure.
Are we getting an idea how useless this terminology is?
I hope these examples convince you that the idea of "first-class" is not a very useful idea in thinking about programming languages overall. When you're talking about a particular feature of a particular language or language family, it can be a useful shorthand ("a language isn't functional unless it has first-class, nested functions") but by and large you're better off saying just what you mean instead of talking about "first-class" or "not first-class" things.
As for "third class", just say no.
Something is first-class if it is explicitly manipulable in the code. In other words, something is first-class if it can be programmatically manipulated at run-time.
This closely relates to meta-programming in the sense that what you describe in the code (at development time) is one meta-level, and what exists at run-time is another meta-level. But the barrier between these two meta-levels can be blurred, for instance with reflection. When something is reified at run-time, it becomes explicitly manipulable.
We speak of first-class object, because objects can be manipulated programmatically at run-time (that's the very purpose).
In Java, you have classes, but they are not first-class, because code normally cannot manipulate a class unless you use reflection. But in Smalltalk, classes are first-class: code can manipulate a class like a regular object.
In Java, you have packages (modules), but they are not first-class, because code does not manipulate packages at run-time. But in Newspeak, packages (modules) are first-class: you can instantiate a module and pass it to another module to specify the modularity at run-time.
In C#, you have closures, which are first-class functions. They exist and can be manipulated programmatically at run-time. Such things do not exist (yet) in Java.
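To make "first-class function" concrete, here is a small sketch in Rust showing a function value being returned from a function, assigned to a variable, stored in a data structure, and passed as an argument:

fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n // returned from a function
}

fn apply(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
    f(x) // passed as an argument
}

fn main() {
    let add2 = make_adder(2); // assigned to a variable
    let triple = |x: i32| x * 3;
    let fs: Vec<Box<dyn Fn(i32) -> i32>> =
        vec![Box::new(add2), Box::new(triple)]; // stored in a data structure
    println!("{}", apply(&*fs[0], 5)); // prints 7
}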
To me, the boundary between first-class and not first-class is not exactly strict. It is sometimes hard to say for certain language constructs, e.g. Java primitive types. We could say they're not first-class because they're not objects and are not manipulable through a reference that can be passed along, but a primitive value does still exist and can be manipulated at run-time.
PS: I agree with Norman Ramsey; 2nd-class and 3rd-class values make no sense to me.
First-class: A first-class construct is one which is an intrinsic element of a language. The following properties must hold.
It must form part of the lexical syntax of the language
It may have operators applied to it
It must be referenceable (for example stored in a variable)
Second-class: A second-class construct is one which is an intrinsic element of the language with the following properties.
It must form part of the lexical syntax of the language
It may have operators applied to it
Third-class: A third-class construct is one which forms part of the syntax of a language.
(From Roger Keays and Andry Rakotonirainy. Context-oriented programming. In Proceedings of the 3rd ACM International Workshop on Data Engineering for Wireless and Mobile Access, MobiDe '03, pages 9-16, New York, NY, USA, 2003. ACM.)
Those terms are very broad and not really globally well defined, but here are the most logical definitions for them:
First-class values are the ones that have actual, tangible values, and so can be operated on and passed around, as variables, arguments, return values, or whatever.
This doesn't really need a thorough example, does it? In C, an int is first-class.
Second-class values are more limited. They have values, but they can't be used directly, so the compiler deliberately limits what you can do with them. You can reference them, though, so you can still have a first-class value representing them.
For example, in C, a function is a second-class value. It can't be altered, but it can be called and referenced.
Third-class values are even more limited. Not only do they not have values, but interaction with them is completely absent; often they exist only as compile-time attributes.
For example, in Rust, a lifetime is a third-class value. You can't use the lifetime at all: you can only receive it as a generic parameter, you can only use it as a generic parameter (when declaring something new), and that's all you can do with it.
Another example: in C++, a struct or a class is a third-class value. This doesn't need much explanation.
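A small Rust sketch of the lifetime point: 'a below can be declared and mentioned in types and signatures, but there is no value 'a to assign, pass around, or compute with.

// 'a exists only as a generic parameter relating references in types and signatures.
struct Wrapper<'a> {
    s: &'a str,
}

fn unwrap_str<'a>(w: &Wrapper<'a>) -> &'a str {
    w.s // we can relate lifetimes here, but never manipulate 'a itself at run-time
}

fn main() {
    let w = Wrapper { s: "hello" };
    println!("{}", unwrap_str(&w));
}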