I've noticed that many operations on lists that modify the list's contents will return None, rather than returning the list itself. Examples:
>>> mylist = ['a', 'b', 'c']
>>> empty = mylist.clear()
>>> restored = mylist.extend(range(3))
>>> backwards = mylist.reverse()
>>> with_four = mylist.append(4)
>>> in_order = mylist.sort()
>>> without_one = mylist.remove(1)
>>> mylist
[0, 2, 4]
>>> [empty, restored, backwards, with_four, in_order, without_one]
[None, None, None, None, None, None]
What is the thought process behind this decision?
To me, it seems hampering, since it prevents "chaining" of list processing (e.g. mylist.reverse().append('a string')[:someLimit]). I imagine it might be that "The Powers That Be" decided that list comprehension is a better paradigm (a valid opinion), and so didn't want to encourage other methods - but it seems perverse to prevent an intuitive method, even if better alternatives exist.
This question is specifically about Python's design decision to return None from mutating list methods like .append. Novices often write incorrect code that expects .append (in particular) to return the same list that was just modified.
For the simple question of "how do I append to a list?" (or debugging questions that boil down to that problem), see Why does "x = x.append([i])" not work in a for loop?.
To get modified versions of the list, see:
For .sort: How can I get a sorted copy of a list?
For .reverse: How can I get a reversed copy of a list (avoid a separate statement when chaining a method after .reverse)?
The same issue applies to some methods of other built-in data types, e.g. set.discard (see How to remove specific element from sets inside a list using list comprehension) and dict.update (see Why doesn't a python dict.update() return the object?).
The same reasoning applies to designing your own APIs. See Is making in-place operations return the object a bad idea?.
The general design principle in Python is for functions that mutate an object in-place to return None. I'm not sure it's the choice I would have made, but the point is to emphasise that no new object is returned.
Guido van Rossum (our Python BDFL) states the design choice on the Python-Dev mailing list:
I'd like to explain once more why I'm so adamant that sort() shouldn't
return 'self'.
This comes from a coding style (popular in various other languages, I
believe especially Lisp revels in it) where a series of side effects
on a single object can be chained like this:
x.compress().chop(y).sort(z)
which would be the same as
x.compress()
x.chop(y)
x.sort(z)
I find the chaining form a threat to readability; it requires that the
reader must be intimately familiar with each of the methods. The
second form makes it clear that each of these calls acts on the same
object, and so even if you don't know the class and its methods very
well, you can understand that the second and third call are applied to
x (and that all calls are made for their side-effects), and not to
something else.
I'd like to reserve chaining for operations that return new values,
like string processing operations:
y = x.rstrip("\n").split(":").lower()
There are a few standard library modules that encourage chaining of
side-effect calls (pstat comes to mind). There shouldn't be any new
ones; pstat slipped through my filter when it was weak.
I can't speak for the developers, but I find this behavior very intuitive.
If a method works on the original object and modifies it in-place, it doesn't return anything, because there is no new information - you obviously already have a reference to the (now mutated) object, so why return it again?
If, however, a method or function creates a new object, then of course it has to return it.
So l.reverse() returns nothing (because the list has now been reversed in place, and the identifier l still points to that list), but reversed(l) has to return the newly generated list because l still points to the old, unmodified list.
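A quick illustration of the difference, using only the built-ins mentioned above:
>>> l = [3, 1, 2]
>>> print(l.reverse())    # in-place: mutates l and returns None
None
>>> l
[2, 1, 3]
>>> list(reversed(l))     # reversed(l) yields a new object; l is unchanged
[3, 1, 2]
>>> l
[2, 1, 3]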
EDIT: I just learned from another answer that this principle is called Command-Query separation.
One could argue that the signature itself makes it clear that the function mutates the list rather than returning a new one: if the function returned a list, its behavior would have been much less obvious.
If you were sent here after asking for help fixing your code:
In the future, please try to look for problems in the code yourself, by carefully studying what happens when the code runs. Rather than giving up because there is an error message, check the result of each calculation, and see where the code starts working differently from what you expect.
If you have code that calls a method like .append or .sort on a list, you will notice that the return value is None, while the list is modified in place. Study the example carefully:
>>> x = ['e', 'x', 'a', 'm', 'p', 'l', 'e']
>>> y = x.sort()
>>> print(y)
None
>>> print(x)
['a', 'e', 'e', 'l', 'm', 'p', 'x']
y got the special None value, because that is what was returned. x changed, because the sort happened in place.
It works this way on purpose, so that code like x.sort().reverse() breaks. See the other answers to understand why the Python developers wanted it that way.
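For example (the exact traceback text may vary between Python versions):
>>> x = [3, 1, 2]
>>> x.sort().reverse()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'reverse'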
To fix the problem
First, think carefully about the intent of the code. Should x change? Do we actually need a separate y?
Let's consider .sort first. If x should change, then call x.sort() by itself, without assigning the result anywhere.
If a sorted copy is needed instead, use y = sorted(x). See How can I get a sorted copy of a list? for details.
For other methods, we can get modified copies like so:
.clear -> there is no point to this; a "cleared copy" of the list is just an empty list. Just use y = [].
.append and .extend -> probably the simplest way is to use the + operator. To add multiple elements from a list l, use y = x + l rather than .extend. To add a single element e, wrap it in a list first: y = x + [e]. Another way, in 3.5 and up, is to use unpacking: y = [*x, *l] for .extend, and y = [*x, e] for .append. See also How to allow list append() method to return the new list for .append and How do I concatenate two lists in Python? for .extend.
.reverse -> First, consider whether an actual copy is needed. The built-in reversed gives you an iterator that can be used to loop over the elements in reverse order. To make an actual copy, simply pass that iterator to list: y = list(reversed(x)). See How can I get a reversed copy of a list (avoid a separate statement when chaining a method after .reverse)? for details.
.remove -> Figure out the index of the element that will be removed (using .index), then use slicing to find the elements before and after that point and put them together. As a function:
def without(a_list, value):
    index = a_list.index(value)
    return a_list[:index] + a_list[index+1:]
(We can translate .pop similarly to make a modified copy, though of course .pop actually returns an element from the list.)
See also A quick way to return list without a specific element in Python.
(If you plan to remove multiple elements, strongly consider using a list comprehension (or filter) instead. It will be much simpler than any of the workarounds needed for removing items from the list while iterating over it. This way also naturally gives a modified copy.)
For any of the above, of course, we can also make a modified copy by explicitly making a copy and then using the in-place method on the copy. The most elegant approach will depend on the context and on personal taste.
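For instance, a minimal sketch of that last approach, plus a comprehension-based removal:
>>> x = [3, 1, 2]
>>> y = x.copy()               # or x[:], or list(x)
>>> y.sort()                   # mutate the copy; x is untouched
>>> x, y
([3, 1, 2], [1, 2, 3])
>>> [e for e in x if e != 1]   # removes every matching element, unlike .remove
[3, 2]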
As we know, a list in Python is a mutable object, and one of the characteristics of a mutable object is the ability to modify its state without assigning the new state to a variable. We should look at this topic more closely to understand the root of the issue.
An object whose internal state can be changed is mutable. On the other hand, an immutable object doesn't allow any change once it has been created. Whether an object is mutable is a property of its type, not of the variable that refers to it.
Every object in Python has three attributes:
Identity – the address of the object in the computer's memory.
Type – the kind of object that was created, e.g. integer, list, string.
Value – the value stored by the object, e.g. s = "a".
While the identity and type cannot be changed once the object is created, the value can be changed, but only for mutable objects.
Let us walk through the code below step by step to see what this means in Python:
Creating a list which contains name of cities
cities = ['London', 'New York', 'Chicago']
Printing the location of the object created in the memory address in hexadecimal format
print(hex(id(cities)))
Output [1]: 0x1691d7de8c8
Adding a new city to the list cities
cities.append('Delhi')
Printing the elements from the list cities, separated by a comma
for city in cities:
    print(city, end=', ')
Output [2]: London, New York, Chicago, Delhi
Printing the location of the object created in the memory address in hexadecimal format
print(hex(id(cities)))
Output [3]: 0x1691d7de8c8
The above example shows that we were able to change the internal state of the object cities by adding one more city, 'Delhi', to it, yet the memory address of the object did not change. This confirms that we did not create a new object; rather, the same object was changed, or mutated. Hence, we can say that the object cities, which is of type list, is a MUTABLE OBJECT.
An immutable object's internal state, by contrast, cannot be changed. For instance, consider the code below and the error message produced when trying to change the value of a tuple at index 0:
Creating a Tuple with variable name foo
foo = (1, 2)
Changing the index 0 value from 1 to 3
foo[0] = 3
TypeError: 'tuple' object does not support item assignment
We can conclude from these examples why a mutating operation on a mutable object shouldn't return anything: it modifies the internal state of the object directly, so there is no point in returning a new, modified object. This is unlike an immutable object, where operations must return a new object in the modified state.
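A compact way to see both behaviours side by side, using only built-ins (id reports the object's identity, as above):
>>> lst = [1, 2]
>>> before = id(lst)
>>> lst.append(3)        # in-place: still the same object afterwards
>>> id(lst) == before
True
>>> s = 'ab'
>>> before = id(s)
>>> s = s + 'c'          # strings are immutable: a new object is created and rebound
>>> id(s) == before
False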
First of all, I should say that what I am suggesting is, without a doubt, bad programming practice, but if you want to use append in a lambda expression and you don't care about readability, there is a way to do just that.
Imagine you have a list of lists and you want to append an element to each inner list using map and lambda. Here is how you can do that:
my_list = [[1, 2, 3, 4],
           [3, 2, 1],
           [1, 1, 1]]
my_new_element = 10
new_list = list(map(lambda x: [x.append(my_new_element), x][1], my_list))
print(new_list)
How it works:
When the lambda computes its output, it first has to evaluate the expression [x.append(my_new_element), x]. Evaluating this list literal runs append as a side effect, so the expression's value is [None, x]; indexing it with [1] then yields x, which now contains the new element.
Using a custom function is more readable and the better option:
def append_my_list(input_list, new_element):
    input_list.append(new_element)
    return input_list
my_list = [[1, 2, 3, 4],
           [3, 2, 1],
           [1, 1, 1]]
my_new_element = 10
new_list = list(map(lambda x: append_my_list(x, my_new_element), my_list))
print(new_list)
I'm new to functional languages and I was wondering why we can't pass a parameter by reference.
I found answers saying that
you are not supposed to change the state of objects once they have been created
but I didn't quite get the idea.
It's not so much that you can't pass references, it's that with referential transparency there isn't a programmer-visible difference between references and values, because you aren't allowed to change what references point to. This makes it actually safer and more prevalent in pure functional programming to pass shared references around everywhere. From a semantic point of view, they may as well be values.
I think you have misunderstood the concept. Both Scheme and C/C++ are pass-by-value languages, and most values are addresses (references).
Purely functional languages can have references and those are passed by value. What they don't have is redefining variables in the same scope (mutate bindings) and they don't have the possibility to update the object the reference points to. All operations return a fresh new object.
As an example I can give you Java's strings. Java is not purely functional but its strings are. If you change the string to uppercase you get a new string object in return and the original one has not been altered.
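Python's strings behave the same way, for comparison:
>>> s = 'hello'
>>> s.upper()   # a new string object is returned
'HELLO'
>>> s           # the original has not been altered
'hello'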
Most languages I know of are pass by value. Pass by name is alien to me.
Because if you pass params by reference you could change something in the parameter, which could introduce a side effect. Consider this:
function pay(person, cost) {
    person.wallet -= cost;
}

function money(person) {
    return person.wallet;
}
let joe = { name: "Joe", wallet: 300 };
console.log(money(joe)); // 300
pay(joe, 20);
console.log(money(joe)); // 280
The two money(joe) are taking the same input (the object joe) and giving different output (300, 280). This is in contradiction to the definition of a pure functional language (all functions must return the same output when given the same input).
If the program was made this way, there is no problem:
function pay(person, cost) {
    return Object.freeze({ ...person, wallet: person.wallet - cost });
}

function money(person) {
    return person.wallet;
}
let joe = Object.freeze({ name: "Joe", wallet: 300 });
console.log(money(joe)); // 300
let joe_with_less_money = pay(joe, 20);
console.log(money(joe)); // still 300
console.log(money(joe_with_less_money)); // 280
Here we have to fake pass-by-value by freezing objects (which makes them immutable) since JavaScript can pass parameters only one way (pass by sharing), but the idea is the same.
(This presupposes the implications of the term "pass-by-reference" that apply to languages like C++, where the implementation detail affects mutability, not the actual implementation detail of modern languages, where references are typically passed under the hood but immutability is assured by other means.)
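For comparison, here is a rough Python analogue of the same frozen-object approach: a minimal sketch using a frozen dataclass for immutability and dataclasses.replace to build the modified copy:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Person:
    name: str
    wallet: int

def pay(person: Person, cost: int) -> Person:
    # returns a new Person; the original is untouched
    return replace(person, wallet=person.wallet - cost)

joe = Person('Joe', 300)
joe_with_less_money = pay(joe, 20)
print(joe.wallet, joe_with_less_money.wallet)  # 300 280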
I have an immutable structure with four objects defined as follows:
struct FltFric
    muS::Array{Float64, 2}
    muD::Array{Float64, 2}
    Dc::Float64
    W::Array{Float64, 2}
end
muS = repmat([0.6], 100, 1) # Coefficient of static friction
muD = repmat([0.5], 100, 1) # Coefficient of dynamic friction
Dc = 0.1 # Critical slip distance
FltFriction = FltFric(muS, muD, Dc, zeros(size(muS)))
I am modifying the values of FltFric.muS as follows:
FltFriction.muS[1:20] = 100
This works fine. But when I try to modify the value of W
FltFriction.W = (FltFriction.muS - FltFriction.muD)./(FltFriction.Dc)
This gives me an error: type FltFric is immutable.
Why does the first statement not give error while the second one does? If the type is immutable, both statements should give an error. What is the difference between the two assignments?
I know that I can circumvent the problem by typing mutable struct, but I don't understand the difference in my two assignments.
I am not a Julia expert, but I think this is a more general question.
In the first assignment, you're modifying certain elements of the list FltFriction.muS. This is fine since although the struct is immutable, the list referred to by .muS is mutable. In other words, you're mutating the list by changing its elements, rather than mutating the struct.
In the second assignment, you're trying to replace the entire list .W in one fell swoop. In this case you're trying to mutate the struct directly, replacing one of its elements. For this reason, the second assignment fails while the first one succeeds.
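A rough Python analogue of the same distinction, sketched with a namedtuple (an immutable container) holding a mutable list; the field names here are only illustrative:

from collections import namedtuple

FltFric = namedtuple('FltFric', ['muS', 'W'])
f = FltFric(muS=[0.6] * 100, W=[0.0] * 100)

f.muS[0] = 100.0   # fine: mutates the list that the field points to
f.W = [1.0]        # AttributeError: can't set attribute (rebinds the field itself)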
I'm speculating here, but I suspect that if you tried to do the second assignment like so:
FltFriction.W[1:end] = ...
Then you would be fine, since you're mutating the list instead of the struct.
As pointed out by a commenter (see below), in Julia there is a "more idiomatic (and more performant)" way to do this correctly and without mutating the struct itself by using the in-place assignment operator (neat!):
FltFriction.W .= (FltFriction.muS - FltFriction.muD)./FltFriction.Dc
Coming from a C# background, I would say that the ref keyword is very useful in certain situations where changes to a method parameter are desired to directly influence the passed value for value types, or for setting a parameter to null.
Also, the out keyword can come in handy when returning a multitude of various logically unconnected values.
My question is: is it possible to pass a parameter to a function by reference in Haskell? If not, what is the direct alternative (if any)?
There is no difference between "pass-by-value" and "pass-by-reference" in languages like Haskell and ML, because it's not possible to assign to a variable in these languages. It's not possible to have "changes to a method parameter" in the first place, let alone have them influence any passed variable.
It depends on context. Without any context, no, you can't (at least not in the way you mean). With context, you may very well be able to do this if you want. In particular, if you're working in IO or ST, you can use IORef or STRef respectively, as well as mutable arrays, vectors, hash tables, weak hash tables (IO only, I believe), etc. A function can take one or more of these and produce an action that (when executed) will modify the contents of those references.
Another sort of context, StateT, gives the illusion of a mutable "state" value implemented purely. You can use a compound state and pass around lenses into it, simulating references for certain purposes.
My question is: is it possible to pass a parameter to a function by reference in Haskell? If not, what is the direct alternative (if any)?
No, values in Haskell are immutable (well, the do notation can create some illusion of mutability, but it all happens inside a function and is an entirely different topic). If you want to change the value, you will have to return the changed value and let the caller deal with it. For instance, see the random number generating function next that returns the value and the updated RNG.
Also, the out keyword can come in handy when returning a multitude of various logically unconnected values.
Consequently, you can't have out either. If you want to return several entirely disconnected values (at which point you should probably ask yourself why disconnected values are being returned from a single function), return a tuple.
No, it's not possible, because Haskell variables are immutable; therefore, the creators of Haskell must have reasoned that there's no point in passing a reference that cannot be changed.
Consider a Haskell variable:
let x = 37
In order to change this, we need to make a temporary variable, and then set the first variable to the temporary variable (with modifications).
let tripleX = x * 3
let x = tripleX
If Haskell had pass by reference, could we do this?
The answer is no.
Suppose we tried:
tripleVar :: Int -> IO ()
tripleVar var = do
    let times_3 = var * 3
    let var = times_3
    return ()
The problem with this code is the line let var = times_3; although we can imagine the variable being passed by reference, the new variable isn't.
In other words, we're introducing a new local variable with the same name;
Take a look at that line again:
let var = times_3
Haskell doesn't know that we want to "change" a global variable; since we can't reassign it, we are creating a new variable with the same name on the local scope, thus not changing the reference. :-(
tripleVar :: Int -> IO ()
tripleVar var = do
    let tripleVar = var
    let var = tripleVar * 3
    return ()

main = do
    let x = 4
    tripleVar x
    print x -- 4 :(
Every so often when programmers are complaining about null errors/exceptions someone asks what we do without null.
I have some basic idea of the coolness of option types, but I don't have the knowledge or language skills to best express it. What is a great explanation of the following, written in a way approachable to the average programmer, that we could point that person towards?
The undesirability of having references/pointers be nullable by default
How option types work, including strategies to ease checking null cases, such as
pattern matching and
monadic comprehensions
Alternative solutions, such as message-eating nil
(other aspects I missed)
I think the succinct summary of why null is undesirable is that meaningless states should not be representable.
Suppose I'm modeling a door. It can be in one of three states: open, shut but unlocked, and shut and locked. Now I could model it along the lines of
class Door
    private bool isShut
    private bool isLocked
and it is clear how to map my three states into these two boolean variables. But this leaves a fourth, undesired state available: isShut==false && isLocked==true. Because the types I have selected as my representation admit this state, I must expend mental effort to ensure that the class never gets into this state (perhaps by explicitly coding an invariant). In contrast, if I were using a language with algebraic data types or checked enumerations that lets me define
type DoorState =
    | Open
    | ShutAndUnlocked
    | ShutAndLocked
then I could define
class Door
    private DoorState state
and there are no more worries. The type system will ensure that there are only three possible states for an instance of class Door to be in. This is what type systems are good at - explicitly ruling out a whole class of errors at compile-time.
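For illustration, here is a rough Python analogue of the same modeling idea using enum (Python enforces this at runtime, and a static checker such as mypy can rule out bad states ahead of time, rather than a compiler):

from enum import Enum, auto

class DoorState(Enum):
    OPEN = auto()
    SHUT_AND_UNLOCKED = auto()
    SHUT_AND_LOCKED = auto()

class Door:
    def __init__(self) -> None:
        # only the three listed states are representable
        self.state = DoorState.SHUT_AND_UNLOCKED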
The problem with null is that every reference type gets this extra state in its space that is typically undesired. A string variable could be any sequence of characters, or it could be this crazy extra null value that doesn't map into my problem domain. A Triangle object has three Points, which themselves have X and Y values, but unfortunately the Points or the Triangle itself might be this crazy null value that is meaningless to the graphing domain I'm working in. Etc.
When you do intend to model a possibly-non-existent value, then you should opt into it explicitly. If the way I intend to model people is that every Person has a FirstName and a LastName, but only some people have MiddleNames, then I would like to say something like
class Person
    private string FirstName
    private Option<string> MiddleName
    private string LastName
where string here is assumed to be a non-nullable type. Then there are no tricky invariants to establish and no unexpected NullReferenceExceptions when trying to compute the length of someone's name. The type system ensures that any code dealing with the MiddleName accounts for the possibility of it being None, whereas any code dealing with the FirstName can safely assume there is a value there.
So for example, using the type above, we could author this silly function:
let TotalNumCharsInPersonsName(p:Person) =
    let middleLen = match p.MiddleName with
                    | None -> 0
                    | Some(s) -> s.Length
    p.FirstName.Length + middleLen + p.LastName.Length
with no worries. In contrast, in a language with nullable references for types like string, then assuming
class Person
    private string FirstName
    private string MiddleName
    private string LastName
you end up authoring stuff like
let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length + p.MiddleName.Length + p.LastName.Length
which blows up if the incoming Person object does not have the invariant of everything being non-null, or
let TotalNumCharsInPersonsName(p:Person) =
    (if p.FirstName=null then 0 else p.FirstName.Length)
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + (if p.LastName=null then 0 else p.LastName.Length)
or maybe
let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + p.LastName.Length
assuming that p ensures first/last are there but middle can be null, or maybe you do checks that throw different types of exceptions, or who knows what. All these crazy implementation choices and things to think about crop up because there's this stupid representable-value that you don't want or need.
Null typically adds needless complexity. Complexity is the enemy of all software, and you should strive to reduce complexity whenever reasonable.
(Note well that there is more complexity to even these simple examples. Even if a FirstName cannot be null, a string can represent "" (the empty string), which is probably also not a person name that we intend to model. As such, even with non-nullable strings, it still might be the case that we are "representing meaningless values". Again, you could choose to battle this either via invariants and conditional code at runtime, or by using the type system (e.g. to have a NonEmptyString type). The latter is perhaps ill-advised ("good" types are often "closed" over a set of common operations, and e.g. NonEmptyString is not closed over .SubString(0,0)), but it demonstrates more points in the design space. At the end of the day, in any given type system, there is some complexity it will be very good at getting rid of, and other complexity that is just intrinsically harder to get rid of. The key for this topic is that in nearly every type system, the change from "nullable references by default" to "non-nullable references by default" is nearly always a simple change that makes the type system a great deal better at battling complexity and ruling out certain types of errors and meaningless states. So it is pretty crazy that so many languages keep repeating this error again and again.)
The nice thing about option types isn't that they're optional. It is that all other types aren't.
Sometimes, we need to be able to represent a kind of "null" state. Sometimes we have to represent a "no value" option as well as the other possible values a variable may take. So a language that flat out disallows this is going to be a bit crippled.
But often, we don't need it, and allowing such a "null" state only leads to ambiguity and confusion: every time I access a reference type variable in .NET, I have to consider that it might be null.
Often, it will never actually be null, because the programmer structures the code so that it can never happen. But the compiler can't verify that, and every single time you see it, you have to ask yourself "can this be null? Do I need to check for null here?"
Ideally, in the many cases where null doesn't make sense, it shouldn't be allowed.
That's tricky to achieve in .NET, where nearly everything can be null. You have to rely on the author of the code you're calling to be 100% disciplined and consistent and have clearly documented what can and cannot be null, or you have to be paranoid and check everything.
However, if types aren't nullable by default, then you don't need to check whether or not they're null. You know they can never be null, because the compiler/type checker enforces that for you.
And then we just need a back door for the rare cases where we do need to handle a null state. Then an "option" type can be used. Then we allow null in the cases where we've made a conscious decision that we need to be able to represent the "no value" case, and in every other case, we know that the value will never be null.
As others have mentioned, in C# or Java for example, null can mean one of two things:
the variable is uninitialized. This should, ideally, never happen. A variable shouldn't exist unless it is initialized.
the variable contains some "optional" data: it needs to be able to represent the case where there is no data. This is sometimes necessary. Perhaps you're trying to find an object in a list, and you don't know in advance whether or not it's there. Then we need to be able to represent that "no object was found".
The second meaning has to be preserved, but the first one should be eliminated entirely. And even the second meaning should not be the default. It's something we can opt in to if and when we need it. But when we don't need something to be optional, we want the type checker to guarantee that it will never be null.
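For what it's worth, Python's optional static typing expresses the same opt-in idea: a plain annotation is treated as never-None, and Optional marks the cases where absence is allowed. A minimal sketch, checked by external tools such as mypy rather than at runtime:

from typing import Optional

def find(names: list[str], target: str) -> Optional[str]:
    # absence ("no object was found") is part of the declared contract
    for name in names:
        if name == target:
            return name
    return None

def greet(name: str) -> str:
    # a checker may assume name is never None here
    return 'Hello, ' + name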
All of the answers so far focus on why null is a bad thing, and how it's kinda handy if a language can guarantee that certain values will never be null.
They then go on to suggest that it would be a pretty neat idea if you enforce non-nullability for all values, which can be done if you add a concept like Option or Maybe to represent types that may not always have a defined value. This is the approach taken by Haskell.
It's all good stuff! But it doesn't preclude the use of explicitly nullable / non-null types to achieve the same effect. Why, then, is Option still a good thing? After all, Scala supports nullable values (it has to, so it can work with Java libraries) but supports Options as well.
Q. So what are the benefits beyond being able to remove nulls from a language entirely?
A. Composition
If you make a naive translation from null-aware code
def fullNameLength(p:Person) = {
    val middleLen =
        if (p.middleName != null)
            p.middleName.length
        else
            0
    p.firstName.length + middleLen + p.lastName.length
}
to option-aware code
def fullNameLength(p:Person) = {
    val middleLen = p.middleName match {
        case Some(x) => x.length
        case _ => 0
    }
    p.firstName.length + middleLen + p.lastName.length
}
there's not much difference! But it's also a terrible way to use Options... This approach is much cleaner:
def fullNameLength(p:Person) = {
    val middleLen = p.middleName map {_.length} getOrElse 0
    p.firstName.length + middleLen + p.lastName.length
}
Or even:
def fullNameLength(p:Person) =
    p.firstName.length +
    p.middleName.map{_.length}.getOrElse(0) +
    p.lastName.length
When you start dealing with List of Options, it gets even better. Imagine that the List people is itself optional:
people flatMap(_ find (_.firstName == "joe")) map (fullNameLength)
How does this work?
//convert an Option[List[Person]] to an Option[S]
//where the function f takes a List[Person] and returns an S
people map f
//find a person named "Joe" in a List[Person].
//returns Some[Person], or None if "Joe" isn't in the list
validPeopleList find (_.firstName == "joe")
//returns None if people is None
//Some(None) if people is valid but doesn't contain Joe
//Some[Some[Person]] if Joe is found
people map (_ find (_.firstName == "joe"))
//flatten it to return None if people is None or Joe isn't found
//Some[Person] if Joe is found
people flatMap (_ find (_.firstName == "joe"))
//return Some(length) if the list isn't None and Joe is found
//otherwise return None
people flatMap (_ find (_.firstName == "joe")) map (fullNameLength)
The corresponding code with null checks (or even elvis ?: operators) would be painfully long. The real trick here is the flatMap operation, which allows for the nested comprehension of Options and collections in a way that nullable values can never achieve.
Since people seem to be missing it: null is ambiguous.
Alice's date-of-birth is null. What does it mean?
Bob's date-of-death is null. What does that mean?
A "reasonable" interpretation might be that Alice's date-of-birth exists but is unknown, whereas Bob's date-of-death does not exist (Bob is still alive). But how did we arrive at two different answers for the same value?
Another problem: null is an edge case.
Is null = null?
Is nan = nan?
Is inf = inf?
Is +0 = -0?
Is +0/0 = -0/0?
The answers are usually "yes", "no", "yes", "yes", "no" respectively. Crazy "mathematicians" call NaN "nullity" and say it compares equal to itself. SQL treats nulls as not equal to anything (so they behave like NaNs). One wonders what happens when you try to store ±∞, ±0, and NaNs into the same database column (there are 2^53 NaNs, half of which are "negative").
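A quick check of the first four in Python (note that Python raises ZeroDivisionError for 0.0/0.0 instead of producing NaN, so the last case has no direct equivalent):
>>> import math
>>> None == None
True
>>> math.nan == math.nan
False
>>> math.inf == math.inf
True
>>> 0.0 == -0.0
True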
To make matters worse, databases differ in how they treat NULL, and most of them aren't consistent (see NULL Handling in SQLite for an overview). It's pretty horrible.
And now for the obligatory story:
I recently designed a (sqlite3) database table with five columns: a NOT NULL, b, id_a, id_b NOT NULL, timestamp. Because it's a generic schema designed to solve a generic problem for fairly arbitrary apps, there are two uniqueness constraints:
UNIQUE(a, b, id_a)
UNIQUE(a, b, id_b)
id_a only exists for compatibility with an existing app design (partly because I haven't come up with a better solution), and is not used in the new app. Because of the way NULL works in SQL, I can insert (1, 2, NULL, 3, t) and (1, 2, NULL, 4, t) and not violate the first uniqueness constraint (because (1, 2, NULL) != (1, 2, NULL)).
This works specifically because of how NULL works in a uniqueness constraint on most databases (presumably so it's easier to model "real-world" situations, e.g. no two people can have the same Social Security Number, but not all people have one).
FWIW, without first invoking undefined behaviour, C++ references cannot "point to" null, and it's not possible to construct a class with uninitialized reference member variables (if an exception is thrown, construction fails).
Sidenote: Occasionally you might want mutually-exclusive pointers (i.e. only one of them can be non-NULL), e.g. in a hypothetical iOS type DialogState = NotShown | ShowingActionSheet UIActionSheet | ShowingAlertView UIAlertView | Dismissed. Instead, I'm forced to do stuff like assert((bool)actionSheet + (bool)alertView == 1).
The undesirability of having references/pointers be nullable by default.
I don't think this is the main issue with nulls, the main issue with nulls is that they can mean two things:
The reference/pointer is uninitialized: the problem here is the same as mutability in general. For one, it makes it more difficult to analyze your code.
The variable being null actually means something: this is the case which Option types actually formalize.
Languages which support Option types typically also forbid or discourage the use of uninitialized variables as well.
How option types work including strategies to ease checking null cases such as pattern matching.
In order to be effective, Option types need to be supported directly in the language. Otherwise it takes a lot of boilerplate code to simulate them. Pattern matching and type inference are two key language features that make Option types easy to work with. For example:
In F#:
//first we create the option list, and then filter out all None Option types and
//map all Some Option types to their values. See how type-inference shines.
let optionList = [Some(1); Some(2); None; Some(3); None]
optionList |> List.choose id //evaluates to [1;2;3]
//here is a simple pattern-matching example
//which prints "1;2;None;3;None;".
//notice how value is extracted from op during the match
optionList
|> List.iter (function Some(value) -> printf "%i;" value | None -> printf "None;")
However, in a language like Java without direct support for Option types, we'd have something like:
//here we perform the same filter/map operation as in the F# example.
List<Option<Integer>> optionList = Arrays.asList(new Some<Integer>(1), new Some<Integer>(2), new None<Integer>(), new Some<Integer>(3), new None<Integer>());
List<Integer> filteredList = new ArrayList<Integer>();
for (Option<Integer> op : optionList)
    if (op instanceof Some)
        filteredList.add(((Some<Integer>) op).getValue());
Alternative solutions such as message-eating nil
Objective-C's "message eating nil" is not so much a solution as an attempt to lighten the headache of null checking. Basically, instead of throwing a runtime exception when trying to invoke a method on a null object, the expression instead evaluates to null itself. Suspending disbelief, it's as if each instance method begins with if (this == null) return null;. But then there is information loss: you don't know whether the method returned null because it is a valid return value, or because the object is actually null. It's a lot like exception swallowing, and doesn't make any progress addressing the issues with null outlined before.
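To make that behaviour concrete, here is a toy Python simulation of a message-eating nil (purely illustrative; the class and method names are made up):

class Nil:
    # Any method call on nil silently returns nil again.
    def __getattr__(self, name):
        def swallow(*args, **kwargs):
            return self
        return swallow
    def __repr__(self):
        return 'nil'

nil = Nil()
# The chain never raises, but we can no longer tell where it went wrong:
print(nil.load_user(42).full_name().upper())  # prints: nil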
Assembly brought us addresses also known as untyped pointers. C mapped them directly as typed pointers but introduced Algol's null as a unique pointer value, compatible with all typed pointers. The big issue with null in C is that since every pointer can be null, one never can use a pointer safely without a manual check.
In higher-level languages, having null is awkward since it really conveys two distinct notions:
Telling that something is undefined.
Telling that something is optional.
Having undefined variables is pretty much useless and leads to undefined behavior wherever they occur. I suppose everybody will agree that having things undefined should be avoided at all costs.
The second case is optionality and is best provided explicitly, for instance with an option type.
Let's say we're in a transport company and we need to create an application to help create a schedule for our drivers. For each driver, we store a few pieces of information, such as the driving licences they have and the phone number to call in case of emergency.
In C we could have:
struct PhoneNumber { ... };
struct MotorbikeLicence { ... };
struct CarLicence { ... };
struct TruckLicence { ... };
struct Driver {
    char name[32]; /* Null terminated */
    struct PhoneNumber * emergency_phone_number;
    struct MotorbikeLicence * motorbike_licence;
    struct CarLicence * car_licence;
    struct TruckLicence * truck_licence;
};
As you can see, in any processing over our list of drivers we'll have to check for null pointers. The compiler won't help you; the safety of the program rests on your shoulders.
In OCaml, the same code would look like this:
type phone_number = { ... }
type motorbike_licence = { ... }
type car_licence = { ... }
type truck_licence = { ... }
type driver = {
    name: string;
    emergency_phone_number: phone_number option;
    motorbike_licence: motorbike_licence option;
    car_licence: car_licence option;
    truck_licence: truck_licence option;
}
Let's now say that we want to print the names of all the drivers along with their truck licence numbers.
In C:
#include <stdio.h>
void print_driver_with_truck_licence_number(struct Driver * driver) {
    /* Check may be redundant but better be safe than sorry */
    if (driver != NULL) {
        printf("driver %s has ", driver->name);
        if (driver->truck_licence != NULL) {
            printf("truck licence %04d-%04d-%08d\n",
                   driver->truck_licence->area_code,
                   driver->truck_licence->year,
                   driver->truck_licence->num_in_year);
        } else {
            printf("no truck licence\n");
        }
    }
}
void print_drivers_with_truck_licence_numbers(struct Driver ** drivers, int nb) {
    if (drivers != NULL && nb >= 0) {
        int i;
        for (i = 0; i < nb; ++i) {
            struct Driver * driver = drivers[i];
            if (driver) {
                print_driver_with_truck_licence_number(driver);
            } else {
                /* Huh ? We got a null inside the array, meaning it probably got
                   corrupt somehow, what do we do ? Ignore ? Assert ? */
            }
        }
    } else {
        /* Caller provided us with erroneous input, what do we do ?
           Ignore ? Assert ? */
    }
}
In OCaml that would be:
open Printf
(* Here we are guaranteed to have a driver instance *)
let print_driver_with_truck_licence_number driver =
  printf "driver %s has " driver.name;
  match driver.truck_licence with
  | None ->
      printf "no truck licence\n"
  | Some licence ->
      (* Here we are guaranteed to have a licence *)
      printf "truck licence %04d-%04d-%08d\n"
        licence.area_code
        licence.year
        licence.num_in_year

(* Here we are guaranteed to have a valid list of drivers *)
let print_drivers_with_truck_licence_numbers drivers =
  List.iter print_driver_with_truck_licence_number drivers
As you can see in this trivial example, there is nothing complicated in the safe version:
It's terser.
You get much better guarantees and no null check is required at all.
The compiler ensures that you correctly deal with the option.
Whereas in C, you could just have forgotten a null check and boom...
Note: these code samples were not compiled, but I hope you got the idea.
Microsoft Research has an interesting project called
Spec#
It is a C# extension with a non-null type and some mechanisms to check your objects against being null, although, IMHO, applying the design-by-contract principle may be more appropriate and more helpful for many troublesome situations caused by null references.
Robert Nystrom offers a nice article here:
http://journal.stuffwithstuff.com/2010/08/23/void-null-maybe-and-nothing/
describing his thought process when adding support for absence and failure to his Magpie programming language.
Coming from a .NET background, I always thought null had a point: it's useful. That was until I came to know of structs and how easy it was to work with them, avoiding a lot of boilerplate code. Tony Hoare, speaking at QCon London in 2009, apologized for inventing the null reference. To quote him:
I call it my billion-dollar mistake. It was the invention of the null
reference in 1965. At that time, I was designing the first
comprehensive type system for references in an object oriented
language (ALGOL W). My goal was to ensure that all use of references
should be absolutely safe, with checking performed automatically by
the compiler. But I couldn't resist the temptation to put in a null
reference, simply because it was so easy to implement. This has led to
innumerable errors, vulnerabilities, and system crashes, which have
probably caused a billion dollars of pain and damage in the last forty
years. In recent years, a number of program analysers like PREfix and
PREfast in Microsoft have been used to check references, and give
warnings if there is a risk they may be non-null. More recent
programming languages like Spec# have introduced declarations for
non-null references. This is the solution, which I rejected in 1965.
See this question over at Programmers too.
I've always looked at null (or nil) as being the absence of a value.
Sometimes you want this, sometimes you don't. It depends on the domain you are working with. If the absence is meaningful (no middle name), then your application can act accordingly. On the other hand, if the null value should not be there (the first name is null), then the developer gets the proverbial 2 a.m. phone call.
I've also seen code overloaded and over-complicated with checks for null. To me this means one of two things:
a) a bug higher up in the application tree
b) bad/incomplete design
On the positive side, null is probably one of the more useful notions for checking whether something is absent, and languages without the concept of null end up over-complicating things when it's time to do data validation. In such languages, an uninitialized variable is usually set to an empty string, 0, or an empty collection. However, if an empty string, 0, or an empty collection are valid values for your application, then you have a problem.
Sometimes this is circumvented by inventing special/weird values for fields to represent an uninitialized state. But then what happens when the special value is entered by a well-intentioned user? And let's not get into the mess this will make of data validation routines.
If the language supported the null concept, all these concerns would vanish.
Vector languages can sometimes get away with not having a null.
The empty vector serves as a typed null in this case.