I'm currently trying to implement my own DynamicArray data type in Swift. To do so I'm using pointers a bit. As my root I'm using an UnsafeMutablePointer of a generic type T:
struct DynamicArray<T> {
    private var root: UnsafeMutablePointer<T> = nil
    private var capacity = 0 {
        didSet {
            //...
        }
    }
    //...
    init(capacity: Int) {
        root = UnsafeMutablePointer<T>.alloc(capacity)
        self.capacity = capacity
    }

    init(count: Int, repeatedValue: T) {
        self.init(capacity: count)
        for index in 0..<count {
            (root + index).memory = repeatedValue
        }
        self.count = count
    }
    //...
}
Now as you can see I've also implemented a capacity property which tells me how much memory is currently allocated for root. Accordingly one can create an instance of DynamicArray using the init(capacity:) initializer, which allocates the appropriate amount of memory, and sets the capacity property.
But then I also implemented the init(count:repeatedValue:) initializer, which first allocates the needed memory using init(capacity: count). It then sets each segment in that part of memory to the repeatedValue.
When using the init(count:repeatedValue:) initializer with number types like Int, Double, or Float, it works perfectly fine. When using Character or String, though, it crashes. It doesn't crash consistently, but actually works sometimes, as can be seen here, by compiling a few times.
var a = DynamicArray<Character>(count: 5, repeatedValue: "A")
println(a.description) //prints [A, A, A, A, A]
//crashes most of the time

var b = DynamicArray<Int>(count: 5, repeatedValue: 1)
println(b.description) //prints [1, 1, 1, 1, 1]
//works consistently
Why is this happening? Does it have to do with String and Character holding values of different length?
Update #1:
Now @AirspeedVelocity addressed the problem with init(count:repeatedValue:). The DynamicArray contains another initializer though, which at first worked in a similar fashion to init(count:repeatedValue:). I changed it to work as @AirspeedVelocity described for init(count:repeatedValue:), though:
init<C: CollectionType where C.Generator.Element == T, C.Index.Distance == Int>(collection: C) {
    let collectionCount = countElements(collection)
    self.init(capacity: collectionCount)
    root.initializeFrom(collection)
    count = collectionCount
}
I'm using the initializeFrom(source:) method as described here. And since collection conforms to CollectionType it should work fine.
I'm now getting this error though:
<stdin>:144:29: error: missing argument for parameter 'count' in call
root.initializeFrom(collection)
^
Is this just a misleading error message again?
Yes, chances are this doesn’t crash with basic inert types like integers but does with strings or arrays because they are more complex and allocate memory for themselves on creation/destruction.
The reason it’s crashing is that UnsafeMutablePointer memory needs to be initialized before it’s used (and similarly, needs to de-inited with destroy before it is deallocated).
So instead of assigning to the memory property, you should use the initialize method:
for index in 0..<count {
    (root + index).initialize(repeatedValue)
}
Since initializing from another collection of values is so common, there’s another version of initialize that takes one. You could use that in conjunction with another helper struct, Repeat, that is a collection of the same value repeated multiple times:
init(count: Int, repeatedValue: T) {
    self.init(capacity: count)
    root.initializeFrom(Repeat(count: count, repeatedValue: repeatedValue))
    self.count = count
}
However, there's something else you need to be aware of: this code is currently going to leak memory. You will need to destroy the contents and dealloc the pointed-to memory at some point before your DynamicArray struct is destroyed, otherwise you'll leak. Since you can't have a deinit in a struct, only in a class, this won't be possible to do automatically (this is assuming you aren't expecting users of your array to do it themselves manually before it goes out of scope).
Additionally, if you want to implement value semantics (as with Array and String) via copy-on-write, you’ll also need a way of detecting if your internal buffer is being referenced multiple times. Take a look at ManagedBufferPointer to see a class that handles this for you.
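The mechanics are Swift-specific, but the copy-on-write idea itself is easy to sketch in any language. Here is a toy Python stand-in (class and field names are mine, purely for illustration) for what the uniqueness check in ManagedBufferPointer buys you: copies share the buffer, and a write clones it only if the buffer is currently shared:

class CowArray:
    def __init__(self, items):
        self._buf = list(items)
        self._owners = [1]              # how many CowArray values share _buf

    def copy(self):                     # O(1): share the buffer, bump the count
        other = CowArray([])
        other._buf = self._buf
        other._owners = self._owners
        self._owners[0] += 1
        return other

    def __getitem__(self, i):
        return self._buf[i]

    def __setitem__(self, i, value):    # copy-on-write: clone only if shared
        if self._owners[0] > 1:
            self._owners[0] -= 1        # leave the old buffer to the others
            self._buf = list(self._buf)
            self._owners = [1]
        self._buf[i] = value

a = CowArray([1, 2, 3])
b = a.copy()                            # no element copying happens here
b[0] = 99                               # b clones the buffer before writing
print(a[0], b[0])                       # prints: 1 99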
I am implementing a custom binary predicate used by the thrust::max_element search algorithm. It also keeps track of a value of interest which the algorithm alone cannot yield. It looks something like this:
struct cmp_class {
    cmp_class(int *val1) {
        this->val1 = val1;
    }
    bool operator()(int i, int j) {
        if (j < *val1) { *val1 = j; }
        return i < j;
    }
    int *val1;
};
int val1 = ARRAY[0];
std::max_element(ARRAY, ARRAY + length, cmp_class(&val1));
// ... use val1
My actual binary predicate is quite a bit more complex, but the example above captures the gist of it: I am passing a pointer to an integer to the binary predicate cmp_class, which then writes to that integer to keep track of some value (here the running minimum). Since max_element first calls the predicate on (ARRAY[0], ARRAY[1]) and then on (running maximum, ARRAY[i]), I only look at value j inside the predicate and therefore initialize val1 = ARRAY[0] to ensure ARRAY[0] is taken into account.
If I do the above on the host, using std::max_element for example, this works fine. The value of val1 is what I expect given known data. However, using Thrust to execute this on the GPU, its value is off. I suspect this is due to the parallelization of thrust::max_element, which is recursively applied to sub-arrays, the results of which form another array which thrust::max_element is run on, etc. Does this hold water?
In general, the binary predicates used for thrust reductions are expected to be commutative. I'm using "commutative" here to mean that the predicate result should be valid regardless of the order in which the arguments are presented.
At the initial stage of a thrust parallel reduction, the arguments are likely to be presented in an order you might expect (i.e. in the order of the vectors passed to the reduce function, or in the order that the values occur in a single vector.) However, later on in the parallel reduction process, the origin of the arguments may get mixed up, during the parallel-sweeping. Any binary comparison functor that assumes an ordering to the arguments is probably broken, for thrust parallel reductions.
In your case, the boolean result of your comparison functor should be valid regardless of the order of arguments presented, and in that respect it appears to be properly commutative.
However, regarding the custom storage functionality you have built in around val1, it seems pretty clear that the results in val1 could be different depending on the order in which arguments are passed to the functor. Consider this simple max-finding sequence amongst a set of values passed to the functor as (i,j) (assume val1 starts out at a large value):
values: 10 5 3 7 12
comparison pairs: (10,5) (10,3) (10,7) (10,12)
comparison result: 10 10 10 12
val1 storage: 5 3 3 3
Now suppose that we simply reverse the order that arguments are presented to the functor:
values: 10 5 3 7 12
comparison pairs: (5,10) (3,10) (7,10) (12,10)
comparison result: 10 10 10 12
val1 storage: 10 10 10 10
Another issue is that you have no atomic protection on val1 that I can see:
if (j < *val1) { *val1 = j; }
The above line of code may be OK in a serial realization. In a parallel multi-threaded algorithm, you have the possibility for multiple threads to be accessing (reading and writing) *val1 simultaneously, which will have undefined results.
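To see the order dependence concretely, here is a small Python simulation (illustrative only; the real code is C++/Thrust). The same tracking comparator is driven once in a serial left-to-right scan and once in a pairwise tree order such as a parallel reduction might use, and the tracked minimum comes out different:

def make_cmp(state):
    def cmp(i, j):
        if j < state["val1"]:          # side effect: track min of the 2nd argument
            state["val1"] = j
        return i < j
    return cmp

def serial_max(values, cmp):           # std::max_element-style scan
    best = values[0]
    for v in values[1:]:
        if cmp(best, v):
            best = v
    return best

def tree_max(values, cmp):             # pairwise tree reduction
    while len(values) > 1:
        nxt = [values[-1]] if len(values) % 2 else []
        for k in range(0, len(values) - 1, 2):
            a, b = values[k], values[k + 1]
            nxt.append(b if cmp(a, b) else a)
        values = nxt
    return values[0]

vals = [10, 5, 3, 7, 12]
s1 = {"val1": vals[0]}
serial_max(vals, make_cmp(s1))
s2 = {"val1": vals[0]}
tree_max(vals, make_cmp(s2))
print(s1["val1"], s2["val1"])          # prints: 3 5 -- same max, different side effect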
In a program I need to efficiently answer queries of the following form:
Given a set of strings A and a query string q return all s ∈ A such that q is a subsequence of s
For example, given A = {"abcdef", "aaaaaa", "ddca"} and q = "acd" exactly "abcdef" should be returned.
The following is what I have considered so far:
For each possible character, make a sorted list of all string/locations where it appears. For querying interleave the lists of the involved characters, and scan through it looking for matches within string boundaries.
This would probably be more efficient for words instead of characters, since the limited number of different characters will make the return lists very dense.
For each n-prefix q might have, store the list of all matching strings. n might realistically be close to 3. For query strings longer than that we brute force the initial list.
This might speed things up a bit, but one could easily imagine some n-subsequences being present close to all strings in A, which means worst case is the same as just brute forcing the entire set.
Do you know of any data structures, algorithms or preprocessing tricks which might be helpful for performing the above task efficiently for large A? (My strings s will be around 100 characters.)
Update: Some people have suggested using LCS to check if q is a subsequence of s. I just want to point out that this can be done using a simple function such as:
def isSub(q, s):
    i, j = 0, 0
    while i != len(q) and j != len(s):
        if q[i] == s[j]:
            i += 1
            j += 1
        else:
            j += 1
    return i == len(q)
Update 2: I've been asked to give more details on the nature of q, A and its elements. While I'd prefer something that works as generally as possible, I assume A will have length around 10^6 and will need to support insertion. The elements s will be shorter with an average length of 64. The queries q will only be 1 to 20 characters and be used for a live search, so the query "ab" will be sent just before the query "abc". Again, I'd much prefer the solution to use the above as little as possible.
Update 3: It has occurred to me, that a data-structure with O(n^{1-epsilon}) lookups, would allow you to solve OVP / disprove the SETH conjecture. That is probably the reason for our suffering. The only options are then to disprove the conjecture, use approximation, or take advantage of the dataset. I imagine quadlets and tries would do the last in different settings.
It could be done by building an automaton. You can start with an NFA (nondeterministic finite automaton, which is like a nondeterministic directed graph) that allows edges labeled with an epsilon character, meaning that during processing you can jump from one node to another without consuming any character. I'll try to reduce your A. Let's say your A is:
A = {'ab', 'bc'}
If you build an NFA for the string ab you should get something like this:
+--(1)--+
e | a| |e
(S)--+--(2)--+--(F)
| b| |
+--(3)--+
The above drawing is not the best-looking automaton, but there are a few points to consider:
S state is the starting state and F is the ending state.
If you are at F state it means your string qualifies as a subsequence.
The rule of propagation within an automaton is that you can consume e (epsilon) to jump forward; therefore you can be in more than one state at each point in time. This is called the e-closure.
Now, given b and starting at state S, I can jump one epsilon to reach 2, then consume b to reach 3. At the end of the string I consume an epsilon and reach F, thus b qualifies as a subsequence of ab. So do a and ab, as you can verify yourself using the above automaton.
The good thing about NFAs is that they have one start state and one final state, so two NFAs can easily be connected using epsilons. There are various algorithms that can help you convert an NFA to a DFA. A DFA is a directed graph which follows a precise path given a character; in particular, it is always in exactly one state at any point in time. (For any NFA, there is a corresponding DFA whose states correspond to sets of states in the NFA.)
So, for A = {'ab', 'bc'}, we would need to build the NFA for ab, then the NFA for bc, then join the two NFAs and build the DFA of the entire big NFA.
EDIT
The subsequence NFA of abc corresponds to the regex a?b?c?, so you can build your NFA from that regex.
Now add acd. To query whether a string like ab is a subsequence of an element of {'abc', 'acd'}, you can use this NFA: (a?b?c?)|(a?c?d?). Once you have the NFA you can convert it to a DFA, where each state will record whether the input is a subsequence of abc, of acd, or maybe of both.
I used the link below to make the NFA graphic from a regular expression:
http://hackingoff.com/images/re2nfa/2013-08-04_21-56-03_-0700-nfa.svg
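If you just want to play with the idea, note that the subsequence NFA of a string s is exactly the regex obtained by making every character of s optional, so Python's re module can stand in for the NFA/DFA machinery (an illustrative sketch, not the efficient construction):

import re

A = ["abc", "acd"]
# subsequence NFA of s <-> regex with every character optional: 'abc' -> 'a?b?c?'
pattern = "|".join("(" + "".join(re.escape(ch) + "?" for ch in s) + ")" for s in A)
# pattern == '(a?b?c?)|(a?c?d?)'

def is_subseq_of_some(q):
    return re.fullmatch(pattern, q) is not None

print(is_subseq_of_some("ac"))   # True: subsequence of both 'abc' and 'acd'
print(is_subseq_of_some("cb"))   # False: right characters, wrong order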
EDIT 2
You're right, in the case where you have 10,000 unique characters in A. By unique I mean A is something like {'abc', 'def'}, i.e. the character sets of the elements of A are pairwise disjoint. Then your DFA would be worst case in terms of states, i.e. 2^10000. But I'm not sure when that would be possible, given that there can never be 10,000 unique characters. Even if you have 10,000 characters in A there will still be repetitions, and that might reduce the number of states a lot, since e-closures might eventually merge. I cannot really estimate by how much. But even with 10 million states, you would consume less than 10 MB of space to construct the DFA. You can even use the NFA and find e-closures at run-time, but that would add to the run-time complexity. You can search for papers on how large regexes are converted to DFAs.
EDIT 3
For the regex (a?b?c?)|(e?d?a?)|(a?b?m?)
If you convert the above NFA to a DFA you get:
It actually has a lot fewer states than the NFA.
Reference:
http://hackingoff.com/compilers/regular-expression-to-nfa-dfa
EDIT 4
After fiddling with that website some more, I found that the worst case would be something like A = {'aaaa', 'bbbbb', 'cccc', ...}. But even in this case the DFA has fewer states than the NFA.
Tests
There have been four main proposals in this thread:
Shivam Kalra suggested creating an automaton based on all the strings in A. This approach has received some attention in the literature, normally under the name "Directed Acyclic Subsequence Graph" (DASG).
J Random Hacker suggested extending my 'prefix list' idea to all 'n choose 3' triplets in the query string, and merging them all using a heap.
In the note "Efficient Subsequence Search in Databases" Rohit Jain, Mukesh K. Mohania and Sunil Prabhakar suggest using a Trie structure with some optimizations and recursively search the tree for the query. They also have a suggestion similar to the triplet idea.
Finally there is the 'naive' approach, which wanghq suggested optimizing by storing an index for each element of A.
To get a better idea of what's worth putting continued effort into, I have implemented the above four approaches in Python and benchmarked them on two sets of data. The implementations could all be made a couple of orders of magnitude faster with a well-done implementation in C or Java, and I haven't included the optimizations suggested for the 'trie' and 'naive' versions.
Test 1
A consists of random paths from my filesystem. q are 100 random [a-z] strings of average length 7. As the alphabet is large (and Python is slow) I was only able to use duplets for method 3.
Construction times in seconds as a function of A size:
Query times in seconds as a function of A size:
Test 2
A consists of randomly sampled [a-b] strings of length 20. q are 100 random [a-b] strings of average length 7. As the alphabet is small we can use quadlets for method 3.
Construction times in seconds as a function of A size:
Query times in seconds as a function of A size:
Conclusions
The double logarithmic plot is a bit hard to read, but from the data we can draw the following conclusions:
Automatons are very fast at querying (constant time); however, they are impossible to create and store for |A| >= 256. It might be possible that a closer analysis could yield a better time/memory balance, or some tricks applicable to the remaining methods.
The dup-/trip-/quadlet method is about twice as fast as my trie implementation and four times as fast as the 'naive' implementation. I used only a linear amount of lists for the merge, instead of n^3 as suggested by j_random_hacker. It might be possible to tune the method better, but in general it was disappointing.
My trie implementation consistently does better than the naive approach by around a factor of two. By incorporating more preprocessing (like "where are the next 'c's in this subtree") or perhaps merging it with the triplet method, it seems like today's winner.
If you can live with an order of magnitude less performance, the naive method does comparatively fine for very little cost.
As you point out, it might be that all strings in A contain q as a subsequence, in which case you can't hope to do better than O(|A|). (That said, you might still be able to do better than the time taken to run LCS on (q, A[i]) for each string i in A, but I won't focus on that here.)
TTBOMK there are no magic, fast ways to answer this question (in the way that suffix trees are the magic, fast way to answer the corresponding question involving substrings instead of subsequences). Nevertheless if you expect the set of answers for most queries to be small on average then it's worth looking at ways to speed up these queries (the ones yielding small-size answers).
I suggest filtering based on a generalisation of your heuristic (2): if some database sequence A[i] contains q as a subsequence, then it must also contain every subsequence of q. (The reverse direction is not true unfortunately!) So for some small k, e.g. 3 as you suggest, you can preprocess by building an array of lists telling you, for every length-k string s, the list of database sequences containing s as a subsequence. I.e. c[s] will contain a list of the ID numbers of database sequences containing s as a subsequence. Keep each list in numeric order to enable fast intersections later.
Now the basic idea (which we'll improve in a moment) for each query q is: Find all k-sized subsequences of q, look up each in the array of lists c[], and intersect these lists to find the set of sequences in A that might possibly contain q as a subsequence. Then for each possible sequence A[i] in this (hopefully small) intersection, perform an O(n^2) LCS calculation with q to see whether it really does contain q.
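Here is a rough Python sketch of that preprocessing and query (k = 3; all names are mine, and a brute-force fallback handles queries shorter than k):

from collections import defaultdict
from itertools import combinations

K = 3

def k_subseqs(s):
    return set(map("".join, combinations(s, K)))

def build_index(A):
    c = defaultdict(list)                  # c[s] = IDs of sequences containing s
    for i, seq in enumerate(A):
        for sub in k_subseqs(seq):
            c[sub].append(i)               # appended in increasing ID order
    return c

def is_subseq(q, s):                       # same check as isSub above
    it = iter(s)
    return all(ch in it for ch in q)

def query(A, c, q):
    subs = k_subseqs(q)
    if not subs:                           # query shorter than K: brute force
        return [s for s in A if is_subseq(q, s)]
    lists = sorted((c.get(sub, []) for sub in subs), key=len)
    candidates = set(lists[0])
    for lst in lists[1:]:                  # smallest lists first, see below
        candidates.intersection_update(lst)
    return [A[i] for i in sorted(candidates) if is_subseq(q, A[i])]

A = ["abcdef", "aaaaaa", "ddca"]
print(query(A, build_index(A), "acd"))     # prints: ['abcdef']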
A few observations:
The intersection of 2 sorted lists of size m and n can be found in O(m+n) time. To find the intersection of r lists, perform r-1 pairwise intersections in any order. Since taking intersections can only produce sets that are smaller or of the same size, time can be saved by intersecting the smallest pair of lists first, then the next smallest pair (this will necessarily include the result of the first operation), and so on. In particular: sort lists in increasing size order, then always intersect the next list with the "current" intersection.
It is actually faster to find the intersection a different way, by adding the first element (sequence number) of each of the r lists into a heap data structure, then repeatedly pulling out the minimum value and replenishing the heap with the next value from the list that the most recent minimum came from. This will produce a list of sequence numbers in nondecreasing order; any value that appears fewer than r times in a row can be discarded, since it cannot be a member of all r sets. (A sketch of this merge appears after this list.)
If a k-string s has only a few sequences in c[s], then it is in some sense discriminating. For most datasets, not all k-strings will be equally discriminating, and this can be used to our advantage. After preprocessing, consider throwing away all lists having more than some fixed number (or some fixed fraction of the total) of sequences, for 3 reasons:
They take a lot of space to store
They take a lot of time to intersect during query processing
Intersecting them will usually not shrink the overall intersection much
It is not necessary to consider every k-subsequence of q. Although this will produce the smallest intersection, it involves merging (|q| choose k) lists, and it might well be possible to produce an intersection that is nearly as small using just a fraction of these k-subsequences. E.g. you could limit yourself to trying all (or a few) k-substrings of q. As a further filter, consider just those k-subsequences whose sequence lists in c[s] are below some value. (Note: if your threshold is the same for every query, you might as well delete all such lists from the database instead, since this will have the same effect, and saves space.)
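A possible Python rendering of the heap-based merge from the second observation (assuming each ID list is sorted and duplicate-free):

import heapq

def intersect_sorted(lists):
    r = len(lists)
    heap = [(lst[0], k, 0) for k, lst in enumerate(lists) if lst]
    if len(heap) < r:                      # an empty list: intersection is empty
        return []
    heapq.heapify(heap)
    result = []
    run_val, run_count = None, 0
    while heap:
        val, k, idx = heapq.heappop(heap)  # values come out in nondecreasing order
        run_val, run_count = (val, run_count + 1) if val == run_val else (val, 1)
        if run_count == r:                 # seen r times in a row: in every list
            result.append(val)
        if idx + 1 < len(lists[k]):        # replenish from the list val came from
            heapq.heappush(heap, (lists[k][idx + 1], k, idx + 1))
    return result

print(intersect_sorted([[1, 3, 5, 7], [3, 4, 5], [0, 3, 5, 9]]))  # prints: [3, 5]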
One thought:
If q tends to be short, maybe reducing A and q to sets will help?
So for the example, derive { (a,b,c,d,e,f), (a), (a,c,d) }. Looking up possible candidates for any q should be faster than the original problem (that's a guess actually, not sure how exactly; maybe sort them and "group" similar ones in bloom filters?), then use brute force to weed out false positives.
If the A strings are lengthy, you could make the characters unique based on their occurrence, so that would be {(a1,b1,c1,d1,e1,f1),(a1,a2,a3,a4,a5,a6),(a1,c1,d1,d2)}. This is fine, because if you search for "ddca" you only want to match the second d to a second d. The size of your alphabet would go up (bad for bloom- or bitmap-style operations) and would be different every time you get new A's, but the number of false positives would go down.
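For example, the occurrence-numbering transform might look like this in Python (my own sketch):

from collections import Counter

def occurrence_tag(s):
    seen = Counter()
    out = []
    for ch in s:
        seen[ch] += 1
        out.append(f"{ch}{seen[ch]}")   # number each repeat of a character
    return out

print(occurrence_tag("ddca"))           # ['d1', 'd2', 'c1', 'a1']
print(occurrence_tag("aaaaaa"))         # ['a1', 'a2', 'a3', 'a4', 'a5', 'a6']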
First let me make sure my understanding/abstraction is correct. The following two requirements should be met:
1) if A is a subsequence of B, then all characters in A should appear in B.
2) for those characters in B, their positions should be in ascending order.
Note that, a char in A might appear more than once in B.
To solve 1), a map/set can be used. The key is the character in string B, and the value doesn't matter.
To solve 2), we need to maintain the positions of each character. Since a character might appear more than once, the positions should be a collection.
So the structure is like:
Map<Character, List<Integer>>
e.g.
abcdefab
a: [0, 6]
b: [1, 7]
c: [2]
d: [3]
e: [4]
f: [5]
Once we have the structure, how do we know if the characters are in the right order as they are in string A? If A is acd, we should check the a at position 0 (but not 6), the c at position 2 and the d at position 3.
The strategy here is to choose the position that's after and close to the previous chosen position. TreeSet is a good candidate for this operation.
public E higher(E e)
Returns the least element in this set strictly greater than the given element, or null if there is no such element.
The runtime complexity is O(s * (n1 + n2) * log(m)), where:
s: number of strings in the set
n1: number of chars in string (B)
n2: number of chars in query string (A)
m: number of duplicates of a character in string (B), e.g. there are 5 a's.
Below is the implementation with some test data.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

public class SubsequenceStr {

    public static void main(String[] args) {
        String[] testSet = new String[] {
            "abcdefgh", //right one
            "adcefgh",  //has all chars, but not the right order
            "bcdefh",   //missing one char
            "",         //empty
            "acdh",     //exact match
            "acd",
            "acdehacdeh"
        };
        List<String> subseqenceStrs = subsequenceStrs(testSet, "acdh");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        //duplicates in query
        subseqenceStrs = subsequenceStrs(testSet, "aa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
        subseqenceStrs = subsequenceStrs(testSet, "aaa");
        for (String str : subseqenceStrs) {
            System.out.println(str);
        }
    }

    public static List<String> subsequenceStrs(String[] strSet, String q) {
        System.out.println("find strings whose subsequence string is " + q);
        List<String> results = new ArrayList<String>();
        for (String str : strSet) {
            char[] chars = str.toCharArray();
            Map<Character, TreeSet<Integer>> charPositions = new HashMap<Character, TreeSet<Integer>>();
            for (int i = 0; i < chars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(chars[i]);
                if (positions == null) {
                    positions = new TreeSet<Integer>();
                    charPositions.put(chars[i], positions);
                }
                positions.add(i);
            }
            char[] qChars = q.toCharArray();
            int lowestPosition = -1;
            boolean isSubsequence = false;
            for (int i = 0; i < qChars.length; i++) {
                TreeSet<Integer> positions = charPositions.get(qChars[i]);
                if (positions == null || positions.size() == 0) {
                    break;
                } else {
                    Integer position = positions.higher(lowestPosition);
                    if (position == null) {
                        break;
                    } else {
                        lowestPosition = position;
                        if (i == qChars.length - 1) {
                            isSubsequence = true;
                        }
                    }
                }
            }
            if (isSubsequence) {
                results.add(str);
            }
        }
        return results;
    }
}
Output:
find strings whose subsequence string is acdh
abcdefgh
acdh
acdehacdeh
find strings whose subsequence string is aa
acdehacdeh
find strings whose subsequence string is aaa
As always, I might be totally wrong :)
You might want to have a look at the book Algorithms on Strings, Trees and Sequences by Dan Gusfield. As it turns out, part of it is available on the internet. You might also want to read Gusfield's Introduction to Suffix Trees. As it turns out, this book covers many approaches for your kind of question. It is considered one of the standard publications in this field.
Get a fast longest common subsequence (LCS) algorithm implementation. Actually it suffices to determine the length of the LCS. Notice that Gusfield's book has very good algorithms and also points to further sources for such algorithms.
Return all s ∈ A with length(LCS(s,q)) == length(q)
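For reference, a minimal Python version of that filter, using the standard quadratic-time LCS dynamic program; length(LCS(s,q)) == length(q) holds exactly when q is a subsequence of s:

def lcs_len(a, b):
    prev = [0] * (len(b) + 1)
    for ch in a:
        cur = [0]
        for j, d in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ch == d else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def query(A, q):
    return [s for s in A if lcs_len(q, s) == len(q)]

print(query(["abcdef", "aaaaaa", "ddca"], "acd"))   # prints: ['abcdef']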
Every so often when programmers are complaining about null errors/exceptions someone asks what we do without null.
I have some basic idea of the coolness of option types, but I don't have the knowledge or language skills to best express it. What is a great explanation of the following, written in a way approachable to the average programmer, that we could point that person towards?
The undesirability of having references/pointers be nullable by default
How option types work including strategies to ease checking null cases such as
pattern matching and
monadic comprehensions
Alternative solutions such as message-eating nil
(other aspects I missed)
I think the succinct summary of why null is undesirable is that meaningless states should not be representable.
Suppose I'm modeling a door. It can be in one of three states: open, shut but unlocked, and shut and locked. Now I could model it along the lines of
class Door
    private bool isShut
    private bool isLocked
and it is clear how to map my three states into these two boolean variables. But this leaves a fourth, undesired state available: isShut==false && isLocked==true. Because the types I have selected as my representation admit this state, I must expend mental effort to ensure that the class never gets into this state (perhaps by explicitly coding an invariant). In contrast, if I were using a language with algebraic data types or checked enumerations that lets me define
type DoorState =
    | Open | ShutAndUnlocked | ShutAndLocked
then I could define
class Door
    private DoorState state
and there are no more worries. The type system will ensure that there are only three possible states for an instance of class Door to be in. This is what type systems are good at: explicitly ruling out a whole class of errors at compile-time.
The problem with null is that every reference type gets this extra state in its space that is typically undesired. A string variable could be any sequence of characters, or it could be this crazy extra null value that doesn't map into my problem domain. A Triangle object has three Points, which themselves have X and Y values, but unfortunately the Points or the Triangle itself might be this crazy null value that is meaningless to the graphing domain I'm working in. Etc.
When you do intend to model a possibly-non-existent value, then you should opt into it explicitly. If the way I intend to model people is that every Person has a FirstName and a LastName, but only some people have MiddleNames, then I would like to say something like
class Person
    private string FirstName
    private Option<string> MiddleName
    private string LastName
where string here is assumed to be a non-nullable type. Then there are no tricky invariants to establish and no unexpected NullReferenceExceptions when trying to compute the length of someone's name. The type system ensures that any code dealing with the MiddleName accounts for the possibility of it being None, whereas any code dealing with the FirstName can safely assume there is a value there.
So for example, using the type above, we could author this silly function:
let TotalNumCharsInPersonsName(p:Person) =
    let middleLen = match p.MiddleName with
                    | None -> 0
                    | Some(s) -> s.Length
    p.FirstName.Length + middleLen + p.LastName.Length
with no worries. In contrast, in a language with nullable references for types like string, then assuming
class Person
    private string FirstName
    private string MiddleName
    private string LastName
you end up authoring stuff like
let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length + p.MiddleName.Length + p.LastName.Length
which blows up if the incoming Person object does not have the invariant of everything being non-null, or
let TotalNumCharsInPersonsName(p:Person) =
    (if p.FirstName=null then 0 else p.FirstName.Length)
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + (if p.LastName=null then 0 else p.LastName.Length)
or maybe
let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + p.LastName.Length
assuming that p ensures first/last are there but middle can be null, or maybe you do checks that throw different types of exceptions, or who knows what. All these crazy implementation choices and things to think about crop up because there's this stupid representable-value that you don't want or need.
Null typically adds needless complexity. Complexity is the enemy of all software, and you should strive to reduce complexity whenever reasonable.
(Note well that there is more complexity to even these simple examples. Even if a FirstName cannot be null, a string can represent "" (the empty string), which is probably also not a person name that we intend to model. As such, even with non-nullable strings, it still might be the case that we are "representing meaningless values". Again, you could choose to battle this either via invariants and conditional code at runtime, or by using the type system (e.g. to have a NonEmptyString type). The latter is perhaps ill-advised ("good" types are often "closed" over a set of common operations, and e.g. NonEmptyString is not closed over .SubString(0,0)), but it demonstrates more points in the design space. At the end of the day, in any given type system, there is some complexity it will be very good at getting rid of, and other complexity that is just intrinsically harder to get rid of. The key for this topic is that in nearly every type system, the change from "nullable references by default" to "non-nullable references by default" is nearly always a simple change that makes the type system a great deal better at battling complexity and ruling out certain types of errors and meaningless states. So it is pretty crazy that so many languages keep repeating this error again and again.)
The nice thing about option types isn't that they're optional. It is that all other types aren't.
Sometimes, we need to be able to represent a kind of "null" state. Sometimes we have to represent a "no value" option as well as the other possible values a variable may take. So a language that flat out disallows this is going to be a bit crippled.
But often, we don't need it, and allowing such a "null" state only leads to ambiguity and confusion: every time I access a reference type variable in .NET, I have to consider that it might be null.
Often, it will never actually be null, because the programmer structures the code so that it can never happen. But the compiler can't verify that, and every single time you see it, you have to ask yourself "can this be null? Do I need to check for null here?"
Ideally, in the many cases where null doesn't make sense, it shouldn't be allowed.
That's tricky to achieve in .NET, where nearly everything can be null. You have to rely on the author of the code you're calling to be 100% disciplined and consistent and have clearly documented what can and cannot be null, or you have to be paranoid and check everything.
However, if types aren't nullable by default, then you don't need to check whether or not they're null. You know they can never be null, because the compiler/type checker enforces that for you.
And then we just need a back door for the rare cases where we do need to handle a null state. Then an "option" type can be used. Then we allow null in the cases where we've made a conscious decision that we need to be able to represent the "no value" case, and in every other case, we know that the value will never be null.
As others have mentioned, in C# or Java for example, null can mean one of two things:
the variable is uninitialized. This should, ideally, never happen. A variable shouldn't exist unless it is initialized.
the variable contains some "optional" data: it needs to be able to represent the case where there is no data. This is sometimes necessary. Perhaps you're trying to find an object in a list, and you don't know in advance whether or not it's there. Then we need to be able to represent that "no object was found".
The second meaning has to be preserved, but the first one should be eliminated entirely. And even the second meaning should not be the default. It's something we can opt in to if and when we need it. But when we don't need something to be optional, we want the type checker to guarantee that it will never be null.
All of the answers so far focus on why null is a bad thing, and how it's kinda handy if a language can guarantee that certain values will never be null.
They then go on to suggest that it would be a pretty neat idea if you enforce non-nullability for all values, which can be done if you add a concept like Option or Maybe to represent types that may not always have a defined value. This is the approach taken by Haskell.
It's all good stuff! But it doesn't preclude the use of explicitly nullable / non-null types to achieve the same effect. Why, then, is Option still a good thing? After all, Scala supports nullable values (it has to, so it can work with Java libraries) but supports Options as well.
Q. So what are the benefits beyond being able to remove nulls from a language entirely?
A. Composition
If you make a naive translation from null-aware code
def fullNameLength(p:Person) = {
  val middleLen =
    if (null == p.middleName)
      0
    else
      p.middleName.length
  p.firstName.length + middleLen + p.lastName.length
}
to option-aware code
def fullNameLength(p:Person) = {
  val middleLen = p.middleName match {
    case Some(x) => x.length
    case _ => 0
  }
  p.firstName.length + middleLen + p.lastName.length
}
there's not much difference! But it's also a terrible way to use Options... This approach is much cleaner:
def fullNameLength(p:Person) = {
  val middleLen = p.middleName map {_.length} getOrElse 0
  p.firstName.length + middleLen + p.lastName.length
}
Or even:
def fullNameLength(p:Person) =
  p.firstName.length +
  p.middleName.map(_.length).getOrElse(0) +
  p.lastName.length
When you start dealing with List of Options, it gets even better. Imagine that the List people is itself optional:
people flatMap(_ find (_.firstName == "joe")) map (fullNameLength)
How does this work?
//convert an Option[List[Person]] to an Option[S]
//where the function f takes a List[Person] and returns an S
people map f
//find a person named "Joe" in a List[Person].
//returns Some[Person], or None if "Joe" isn't in the list
validPeopleList find (_.firstName == "joe")
//returns None if people is None
//Some(None) if people is valid but doesn't contain Joe
//Some[Some[Person]] if Joe is found
people map (_ find (_.firstName == "joe"))
//flatten it to return None if people is None or Joe isn't found
//Some[Person] if Joe is found
people flatMap (_ find (_.firstName == "joe"))
//return Some(length) if the list isn't None and Joe is found
//otherwise return None
people flatMap (_ find (_.firstName == "joe")) map (fullNameLength)
The corresponding code with null checks (or even elvis ?: operators) would be painfully long. The real trick here is the flatMap operation, which allows for the nested comprehension of Options and collections in a way that nullable values can never achieve.
Since people seem to be missing it: null is ambiguous.
Alice's date-of-birth is null. What does it mean?
Bob's date-of-death is null. What does that mean?
A "reasonable" interpretation might be that Alice's date-of-birth exists but is unknown, whereas Bob's date-of-death does not exist (Bob is still alive). But why did we get to different answers?
Another problem: null is an edge case.
Is null = null?
Is nan = nan?
Is inf = inf?
Is +0 = -0?
Is +0/0 = -0/0?
The answers are usually "yes", "no", "yes", "yes", "no", respectively. Crazy "mathematicians" call NaN "nullity" and say it compares equal to itself. SQL treats nulls as not equal to anything (so they behave like NaNs). One wonders what happens when you try to store ±∞, ±0, and NaNs into the same database column (there are 2^53 NaNs, half of which are "negative").
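You can check most of these in any IEEE 754 language. In Python, for instance:

nan = float("nan")
inf = float("inf")
print(None == None)   # True  -- the "null = null" case
print(nan == nan)     # False
print(inf == inf)     # True
print(+0.0 == -0.0)   # True
# Python raises ZeroDivisionError for 0.0/0.0, but the IEEE 754 results of
# +0/0 and -0/0 are both NaN, and NaN != NaN, hence the final "no".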
To make matters worse, databases differ in how they treat NULL, and most of them aren't consistent (see NULL Handling in SQLite for an overview). It's pretty horrible.
And now for the obligatory story:
I recently designed a (sqlite3) database table with five columns a NOT NULL, b, id_a, id_b NOT NULL, timestamp. Because it's a generic schema designed to solve a generic problem for fairly arbitrary apps, there are two uniqueness constraints:
UNIQUE(a, b, id_a)
UNIQUE(a, b, id_b)
id_a only exists for compatibility with an existing app design (partly because I haven't come up with a better solution), and is not used in the new app. Because of the way NULL works in SQL, I can insert (1, 2, NULL, 3, t) and (1, 2, NULL, 4, t) and not violate the first uniqueness constraint (because (1, 2, NULL) != (1, 2, NULL)).
This works specifically because of how NULL works in a uniqueness constraint on most databases (presumably so it's easier to model "real-world" situations, e.g. no two people can have the same Social Security Number, but not all people have one).
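The behaviour is easy to reproduce with Python's sqlite3 (schema trimmed to the relevant columns):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a NOT NULL, b, id_a, id_b NOT NULL, UNIQUE(a, b, id_a))")
db.execute("INSERT INTO t VALUES (1, 2, NULL, 3)")
db.execute("INSERT INTO t VALUES (1, 2, NULL, 4)")          # accepted: NULL != NULL
print(db.execute("SELECT COUNT(*) FROM t").fetchone()[0])   # prints: 2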
FWIW, without first invoking undefined behaviour, C++ references cannot "point to" null, and it's not possible to construct a class with uninitialized reference member variables (if an exception is thrown, construction fails).
Sidenote: Occasionally you might want mutually-exclusive pointers (i.e. only one of them can be non-NULL), e.g. in a hypothetical iOS type DialogState = NotShown | ShowingActionSheet UIActionSheet | ShowingAlertView UIAlertView | Dismissed. Instead, I'm forced to do stuff like assert((bool)actionSheet + (bool)alertView == 1).
The undesirability of having references/pointers be nullable by default.
I don't think this is the main issue with nulls, the main issue with nulls is that they can mean two things:
The reference/pointer is uninitialized: the problem here is the same as mutability in general. For one, it makes it more difficult to analyze your code.
The variable being null actually means something: this is the case which Option types actually formalize.
Languages which support Option types typically also forbid or discourage the use of uninitialized variables as well.
How option types work including strategies to ease checking null cases such as pattern matching.
In order to be effective, Option types need to be supported directly in the language. Otherwise it takes a lot of boilerplate code to simulate them. Pattern matching and type inference are two key language features making Option types easy to work with. For example:
In F#:
//first we create the option list, and then filter out all None Option types and
//map all Some Option types to their values. See how type-inference shines.
let optionList = [Some(1); Some(2); None; Some(3); None]
optionList |> List.choose id //evaluates to [1;2;3]
//here is a simple pattern-matching example
//which prints "1;2;None;3;None;".
//notice how value is extracted from op during the match
optionList
|> List.iter (function Some(value) -> printf "%i;" value | None -> printf "None;")
However, in a language like Java without direct support for Option types, we'd have something like:
//here we perform the same filter/map operation as in the F# example.
List<Option<Integer>> optionList = Arrays.asList(new Some<Integer>(1), new Some<Integer>(2), new None<Integer>(), new Some<Integer>(3), new None<Integer>());
List<Integer> filteredList = new ArrayList<Integer>();
for (Option<Integer> op : optionList)
    if (op instanceof Some)
        filteredList.add(((Some<Integer>) op).getValue());
Alternative solutions such as message-eating nil
Objective-C's "message eating nil" is not so much a solution as an attempt to lighten the head-ache of null checking. Basically, instead of throwing a runtime exception when trying to invoke a method on a null object, the expression instead evaluates to null itself. Suspending disbelief, it's as if each instance method begins with if (this == null) return null;. But then there is information loss: you don't know whether the method returned null because it is valid return value, or because the object is actually null. It's a lot like exception swallowing, and doesn't make any progress addressing the issues with null outlined before.
Assembly brought us addresses, also known as untyped pointers. C mapped them directly as typed pointers but introduced Algol's null as a unique pointer value, compatible with all typed pointers. The big issue with null in C is that since every pointer can be null, one can never use a pointer safely without a manual check.
In higher-level languages, having null is awkward since it really conveys two distinct notions:
Telling that something is undefined.
Telling that something is optional.
Having undefined variables is pretty much useless, and yields undefined behavior whenever they occur. I suppose everybody will agree that having things undefined should be avoided at all costs.
The second case is optionality and is best provided explicitly, for instance with an option type.
Let's say we're a transport company and we need to create an application to help create a schedule for our drivers. For each driver we store a few pieces of information, such as the driving licences they hold and the phone number to call in case of emergency.
In C we could have:
struct PhoneNumber { ... };
struct MotorbikeLicence { ... };
struct CarLicence { ... };
struct TruckLicence { ... };

struct Driver {
    char name[32]; /* Null terminated */
    struct PhoneNumber * emergency_phone_number;
    struct MotorbikeLicence * motorbike_licence;
    struct CarLicence * car_licence;
    struct TruckLicence * truck_licence;
};
As you can see, in any processing over our list of drivers we'll have to check for null pointers. The compiler won't help you; the safety of the program rests on your shoulders.
In OCaml, the same code would look like this:
type phone_number = { ... }
type motorbike_licence = { ... }
type car_licence = { ... }
type truck_licence = { ... }

type driver = {
  name: string;
  emergency_phone_number: phone_number option;
  motorbike_licence: motorbike_licence option;
  car_licence: car_licence option;
  truck_licence: truck_licence option;
}
Let's now say that we want to print the names of all the drivers along with their truck licence numbers.
In C:
#include <stdio.h>

void print_driver_with_truck_licence_number(struct Driver * driver) {
    /* Check may be redundant but better be safe than sorry */
    if (driver != NULL) {
        printf("driver %s has ", driver->name);
        if (driver->truck_licence != NULL) {
            printf("truck licence %04d-%04d-%08d\n",
                   driver->truck_licence->area_code,
                   driver->truck_licence->year,
                   driver->truck_licence->num_in_year);
        } else {
            printf("no truck licence\n");
        }
    }
}
void print_drivers_with_truck_licence_numbers(struct Driver ** drivers, int nb) {
    if (drivers != NULL && nb >= 0) {
        int i;
        for (i = 0; i < nb; ++i) {
            struct Driver * driver = drivers[i];
            if (driver) {
                print_driver_with_truck_licence_number(driver);
            } else {
                /* Huh? We got a null inside the array, meaning it probably got
                   corrupt somehow. What do we do? Ignore? Assert? */
            }
        }
    } else {
        /* Caller provided us with erroneous input. What do we do?
           Ignore? Assert? */
    }
}
In OCaml that would be:
open Printf

(* Here we are guaranteed to have a driver instance *)
let print_driver_with_truck_licence_number driver =
  printf "driver %s has " driver.name;
  match driver.truck_licence with
  | None ->
      printf "no truck licence\n"
  | Some licence ->
      (* Here we are guaranteed to have a licence *)
      printf "truck licence %04d-%04d-%08d\n"
        licence.area_code
        licence.year
        licence.num_in_year

(* Here we are guaranteed to have a valid list of drivers *)
let print_drivers_with_truck_licence_numbers drivers =
  List.iter print_driver_with_truck_licence_number drivers
As you can see in this trivial example, there is nothing complicated in the safe version:
It's terser.
You get much better guarantees and no null check is required at all.
The compiler ensures that you correctly deal with the option.
Whereas in C, you could just have forgotten a null check and boom...
Note: these code samples were not compiled, but I hope you got the ideas.
Microsoft Research has an interesting project called Spec#. It is a C# extension with a non-null type and some mechanisms to check your objects against being null, although, IMHO, applying the design-by-contract principle may be more appropriate and more helpful for many troublesome situations caused by null references.
Robert Nystrom offers a nice article here:
http://journal.stuffwithstuff.com/2010/08/23/void-null-maybe-and-nothing/
describing his thought process when adding support for absence and failure to his Magpie programming language.
Coming from a .NET background, I always thought null had a point: it's useful. Until I came to know of structs and how easy it was to work with them, avoiding a lot of boilerplate code. Tony Hoare, speaking at QCon London in 2009, apologized for inventing the null reference. To quote him:
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965.
See this question too at programmers
I've always looked at Null (or nil) as being the absence of a value.
Sometimes you want this, sometimes you don't. It depends on the domain you are working with. If the absence is meaningful (no middle name), then your application can act accordingly. On the other hand, if the null value should not be there (the first name is null), then the developer gets the proverbial 2 a.m. phone call.
I've also seen code overloaded and over-complicated with checks for null. To me this means one of two things:
a) a bug higher up in the application tree
b) bad/incomplete design
On the positive side, null is probably one of the more useful notions for checking if something is absent, and languages without the concept of null will end up over-complicating things when it's time to do data validation. In this case, if a new variable is not initialized, said languages will usually set variables to an empty string, 0, or an empty collection. However, if an empty string, 0, or an empty collection are valid values for your application, then you have a problem.
Sometimes this is circumvented by inventing special/weird values for fields to represent an uninitialized state. But then what happens when the special value is entered by a well-intentioned user? And let's not get into the mess this will make of data validation routines.
If the language supported the null concept, all these concerns would vanish.
Vector languages can sometimes get away with not having a null.
The empty vector serves as a typed null in this case.
I need some help understanding how stdext::hash_multimap's lower_bound, upper_bound and equal_range work (at least the VS2005 version of it).
I have the following code (summarized for the question)
#include <hash_map>
using stdext::hash_multimap;
using std::greater;
using stdext::hash_compare;
using std::pair;
using std::cout;
typedef hash_multimap < double, CComBSTR, hash_compare< double, greater<double> > > HMM;
HMM hm1;
HMM :: const_iterator it1, it2;
pair<HMM::const_iterator, HMM::const_iterator> pairHMM;
typedef pair <double, CComBSTR> PairDblStr;
// inserting only two values for sample
hm1.insert ( PairDblStr ( 0.224015748, L"#1-64" ) );
hm1.insert ( PairDblStr ( 0.215354331, L"#1-72" ) );
// Using a double value in between the inserted key values to find one of the elements in the map
it1 = hm1.lower_bound( 0.2175 );
if( it1 == hm1.end() )
{
    cout << "lower_bound failed\n";
}

it1 = hm1.upper_bound( 0.2175 );
if( it1 == hm1.end() )
{
    cout << "upper_bound failed\n";
}

pairHMM = hm1.equal_range( 0.2175 );
if( ( pairHMM.first == hm1.end() ) && ( pairHMM.second == hm1.end() ) )
{
    cout << "equal_range failed\n";
}
As mentioned in the comment I am passing in a value (0.2175) that is in between the two inserted key values (0.224015748, 0.215354331). But the output of the code is:
lower_bound failed
upper_bound failed
equal_range failed
Did I misunderstand how the lower_bound, upper_bound and equal_range can be used in maps? Can we not find a "closest match" key using these methods? If these methods are not suitable, would you have any suggestion on what I could use for my requirement?
Thanks in advance for any help.
Thanks to @billy-oneal and @dauphic for their comments and edits. I have updated the code above to make it compilable and runnable (once you include the correct headers, of course).
Can we not find a "closest match" key using these methods?
No. hash_multimap is implemented using a hashtable. Two keys that are very close to each other (0.2153 and 0.2175, for example) will likely map to totally different bins in the hashtable.
A hashtable does not maintain its elements in a sorted order, so you cannot find the closest match to a given key without a linear search.
The lower_bound, upper_bound, and equal_range functions in hash_multimap have a somewhat odd implementation in the Visual C++ standard library extensions.
Consider the documentation for lower_bound:
The member function determines the first element X in the controlled sequence that hashes to the same bucket as key and has equivalent ordering to key. If no such element exists, it returns hash_map::end; otherwise it returns an iterator that designates X. You use it to locate the beginning of a sequence of elements currently in the controlled sequence that match a specified key.
And the documentation for upper_bound:
The member function determines the last element X in the controlled sequence that hashes to the same bucket as key and has equivalent ordering to key. If no such element exists, or if X is the last element in the controlled sequence, it returns hash_map::end; otherwise it returns an iterator that designates the first element beyond X. You use it to locate the end of a sequence of elements currently in the controlled sequence that match a specified key.
Essentially, these functions allow you to identify the range of elements that have a given key. Their behavior is not the same as the behavior of std::lower_bound or std::map::lower_bound (theirs is the behavior that you were expecting).
For what it's worth, the C++0x unordered associative containers do not provide lower_bound, upper_bound, or equal_range functions.
Would you have any suggestion on what I could use for my requirement?
Yes: if you need the behavior of lower_bound and upper_bound, use an ordered associative container like std::multimap.
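To illustrate the difference, here is the lower_bound behaviour you were expecting, sketched in Python over a plain sorted key list with binary search (ascending order, unlike the greater<double> comparator above):

import bisect

keys = sorted([0.224015748, 0.215354331])
x = 0.2175

i = bisect.bisect_left(keys, x)              # first index with keys[i] >= x
if i < len(keys):
    print("lower_bound:", keys[i])           # 0.224015748
if i > 0:
    print("closest key below:", keys[i - 1]) # 0.215354331

This kind of ordered query is exactly what tree-based containers like std::multimap support in O(log n), and what a hash table fundamentally cannot.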