How can natural numbers be represented to offer constant time addition? - haskell

Cirdec's answer to a largely unrelated question made me wonder how best to represent natural numbers with constant-time addition, subtraction by one, and testing for zero.
Why Peano arithmetic isn't good enough:
Suppose we use
data Nat = Z | S Nat
Then we can write
Z + n = n
S m + n = S(m+n)
We can calculate m+n in O(1) time by placing m-r debits (for some constant r), one on each S constructor added onto n. To get O(1) isZero, we need to be sure to have at most p debits per S constructor, for some constant p. This works great if we calculate a + (b + (c+...)), but it falls apart if we calculate ((...+b)+c)+d. The trouble is that the debits stack up on the front end.
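For reference, the O(1) zero test this is aiming at is just a constructor match (a minimal sketch of my own, not code from the original question):
isZero :: Nat -> Bool
isZero Z     = True
isZero (S _) = False
With lazy evaluation, m + n itself returns immediately as a suspension; the real cost surfaces when isZero (or any pattern match) forces the outermost constructor, which is exactly where the debits above have to be paid off.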
One option
The easy way out is to just use catenable lists, such as the ones Okasaki describes, directly. There are two problems:
O(n) space is not really ideal.
It's not entirely clear (at least to me) that the complexity of bootstrapped queues is necessary when we don't care about order the way we would for lists.
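For what it's worth, here is a hedged sketch of that option, representing a number as a catenable list of n units. To keep it short it uses a plain Haskell list where Okasaki uses a bootstrapped queue, so it only shows the shape of the idea, not the claimed amortized bounds; it also makes the O(n) space cost plain:
data Cat a = E | C a [Cat a]

append :: Cat a -> Cat a -> Cat a
append xs E = xs
append E ys = ys
append (C x cs) ys = C x (cs ++ [ys])   -- Okasaki snocs onto a queue here

ctail :: Cat a -> Cat a
ctail E        = E
ctail (C _ cs) = foldr append E cs      -- Okasaki folds lazily over the queue

type CatNat = Cat ()

zeroC, oneC :: CatNat
zeroC = E
oneC  = C () []

addC :: CatNat -> CatNat -> CatNat
addC = append          -- addition is catenation

predC :: CatNat -> CatNat
predC = ctail          -- subtraction by one drops a unit

isZeroC :: CatNat -> Bool
isZeroC E = True
isZeroC _ = False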

As far as I know, Idris (a dependently typed, purely functional language which is very close to Haskell) deals with this in a quite straightforward way. The compiler is aware of Nats and Fins (upper-bounded Nats) and replaces them with machine integer types and operations whenever possible, so the resulting code is pretty efficient. However, that's not true for custom types (even isomorphic ones), nor for the compilation stage (there were some code samples using Nats for type checking which resulted in exponential growth in compile time; I can provide them if needed).
In the case of Haskell, I think a similar compiler extension could be implemented. Another possibility is to write TH macros that would transform the code. Of course, neither option is easy.

My understanding is that, in basic computer programming terminology, the underlying problem is that you want to concatenate lists in constant time. The lists don't have cheats like forward references, so you can't jump to the end in O(1) time, for example.
You can use rings instead, which you can merge in O(1) time, regardless of whether a+(b+(c+...)) or ((...+c)+b)+a logic is used. The nodes in the rings don't need to be doubly linked; just a link to the next node is enough.
Subtraction is the removal of any node, O(1), and testing for zero (or one) is trivial. Testing for n > 1 is O(n), however.
If you want to reduce space, then at each operation you can merge the nodes at the insertion or deletion points and weight the remaining ones higher. The more operations you do, the more compact the representation becomes! I think the worst case will still be O(n), however.
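A hedged Haskell sketch of this ring idea (my own illustration, using mutable IORef links since a spliceable ring is inherently an imperative structure; it covers only the basic operations, not the weighting/compaction refinement mentioned above):
import Data.IORef
import Data.Maybe (isNothing)

-- A node in a singly linked ring; comparing the wrapped IORefs gives node identity.
newtype Node = Node (IORef Node) deriving Eq

nextRef :: Node -> IORef Node
nextRef (Node r) = r

-- Nothing represents 0; Just h is a handle into a ring of n >= 1 nodes.
type Ring = Maybe Node

one :: IO Ring
one = do
  r <- newIORef undefined       -- placeholder, tied into a knot just below
  let n = Node r
  writeIORef r n                -- a single node pointing at itself
  return (Just n)

-- O(1) addition: splice two disjoint rings by swapping two successor links.
add :: Ring -> Ring -> IO Ring
add Nothing r = return r
add r Nothing = return r
add (Just a) (Just b) = do
  an <- readIORef (nextRef a)
  bn <- readIORef (nextRef b)
  writeIORef (nextRef a) bn
  writeIORef (nextRef b) an
  return (Just a)

-- O(1) subtraction by one: unlink the handle's successor (or empty the ring).
decrement :: Ring -> IO Ring
decrement Nothing = return Nothing
decrement (Just a) = do
  s <- readIORef (nextRef a)
  if s == a
    then return Nothing         -- a one-node ring: 1 - 1 = 0
    else do
      s' <- readIORef (nextRef s)
      writeIORef (nextRef a) s'
      return (Just a)

-- O(1) test for zero.
isZero :: Ring -> Bool
isZero = isNothing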

We know that there are two "extremal" solutions for efficient addition of natural numbers:
Memory efficient: the standard binary representation of natural numbers, which uses O(log n) memory and requires O(log n) time for addition. (See also the chapter "Binary Representations" in Okasaki's book.)
CPU efficient: uses just O(1) time for addition. (See the chapter "Structural Abstraction" in the book.) However, this solution uses O(n) memory, as we'd represent the natural number n as a list of n copies of ().
I haven't done the actual calculations, but I believe for the O(1) numerical addition we won't need the full power of O(1) FIFO queues, it'd be enough to bootstrap standard list [] (LIFO) in the same way. If you're interested, I could try to elaborate on that.
The problem with the CPU-efficient solution is that we need to add some redundancy to the memory representation so that we can spare enough CPU time. In some cases, adding such redundancy can be accomplished without compromising the memory size (as with the O(1) increment/decrement operation). But if we allow arbitrary tree shapes, as in the CPU-efficient solution with bootstrapped lists, there are simply too many tree shapes to distinguish them in O(log n) memory.
So the question is: Can we find just the right amount of redundancy so that sub-linear amount of memory is enough and with which we could achieve O(1) addition? I believe the answer is no:
Let's have a representation and algorithm that has O(1) time addition. Let's then have a number of the magnitude of m bits, which we compute as a sum of 2^k numbers, each of them of the magnitude of (m-k) bits. To represent each of those summands we need (regardless of the representation) a minimum of (m-k) bits of memory, so at the beginning we start with (at least) (m-k)·2^k bits of memory. Now at each of those 2^k additions, we are allowed to perform a constant amount of operations, so we are able to process (and ideally remove) a total of C·2^k bits. Therefore at the end, the lower bound for the number of bits we need to represent the outcome is (m-k-C)·2^k bits. Since k can be chosen arbitrarily, our adversary can set k=m-C-1, which means the total sum will be represented with at least 2^(m-C-1) = 2^m/2^(C+1) ∈ Ω(2^m) bits. So a natural number n will always need Ω(n) bits of memory!

Related

Understanding space leak with Arrow

I am having difficulty understanding the space leak in Hudak's paper "Plugging a Space Leak with an Arrow"
https://www.sciencedirect.com/science/article/pii/S1571066107005919.
1) What exactly does O(n) space complexity mean? The total memory allocated with respect to input size? What about garbage collection along the way?
2) If the definition in 1) holds, how is it that on page 34 they say that if dt is constant, the signal type is akin to a list type and runs in constant space? Doesn't integralC still create 1 unit of space at each step, n units in total, that is, still O(n)?
3) I totally do not understand why time complexity is O(n^2). I do have an inkling of what needs to be evaluated (i', i'', i''' in picture below), but how is that O(n^2)?
The image represents the evaluation steps I have drawn in lambda graph notation. Each step sees its structure ADDED to the overall scope rather than REPLACING whatever is in it. Square denotes pointer, so square(i') in step 2 denotes i' block in step 1 for example.
I have only glanced at the paper briefly, but will do my best.
As usual, space complexity means that at some point in time we need to be storing that much "stuff" simultaneously in memory. GC says we can recover memory from variables we no longer need, but here we need to be remembering O(n) stuff; the memory can't be recovered yet because we (may) still need access to any part of it. You can think of it as: reusing memory (via e.g. GC) adds to time but not space complexity. Here, n means computing the nth value by providing n time steps (dts).
If dt is constant, then instead of the type C a = (a, dt -> C a) we have C' a = (a, C' a), which is just a (nonempty) list. The point of the paper is that either type can be made to run in constant space, but if it were isomorphic to lists then that's a solved problem. To see why creating a new value at each step can be constant memory, consider a possible evaluation of (iterate f)!!n, where we store just x, then overwrite it with f x, then overwrite it with f (f x), and so on until we have f^n(x), but only ever using this one cell of memory for our values (and technically a second cell to iterate up to n).
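As a concrete (if simplified) illustration of that last point in Haskell, here is one way such an evaluation stays in constant space; this is my own sketch, and it relies on foldl' forcing each intermediate result to weak head normal form so nothing accumulates:
import Data.List (foldl')

-- Apply f to x0 n times, keeping only the current value alive.
nthIterate :: Int -> (a -> a) -> a -> a
nthIterate n f x0 = foldl' (\x _ -> f x) x0 [1 .. n]
Compare this with iterate f x0 !! n evaluated naively, which can build a chain of n unevaluated thunks before anything is forced.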
Let's consider a really simple example of evaluation giving these different complexities. Let's say we're generating a list from some seed where each item is the sum of all the previous items. To calculate each next item, we could hold the entirety of the initial part p of the list in memory (O(len(p))) and sum it (O(len(p))), resulting in total memory O(n) and run time O(n^2) to retrieve the nth element - or we could observe that this is in fact the same as doubling the previous item, allowing us to use constant memory and linear time. I think the analogy section is quite helpful - can you mechanically write out successors for the first few values and see how the two different evaluation strategies rapidly diverge in the number of steps needed?
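Here is a rough Haskell rendering of that toy example (the names and exact formulation are mine):
-- Each item is the sum of all previous items, recomputing the prefix each time:
-- the live prefix costs O(n) memory and producing the first n items takes O(n^2) time.
sumsNaive :: Int -> [Int]
sumsNaive seed = seed : go [seed]
  where
    go prefix = let next = sum prefix
                in next : go (prefix ++ [next])

-- The same sequence, using the observation that each new item doubles the previous one:
-- constant extra memory and O(n) time for the first n items.
sumsFast :: Int -> [Int]
sumsFast seed = seed : iterate (* 2) seed
Both take 6 (sumsNaive 1) and take 6 (sumsFast 1) give [1,1,2,4,8,16].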

Python: Time and space complexity of gcd and recursive iterations

I'm studying for mid-terms and this is one of the questions from a past year's paper at university. (Questions stated below.)
Given Euclid’s algorithm, we can write the function gcd.
def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(b, a % b)
[Reduced Proper Fraction]
Consider the fraction, n/d , where n and d are positive integers.
If n < d and GCD(n,d) = 1, it is called a reduced proper fraction.
If we list the set of reduced proper fractions for n <=8 in ascending order of size, we get:
1/8,1/7,1/6,1/5,1/4,2/7,1/3,3/8,2/5,3/7,1/2,4/7,3/5,5/8,2/3,5/7,3/4,4/5,5/6,6/7,7/8
It can be seen that there are 21 elements in this set.
Implement the function count_fraction that takes an integer n and returns the number of reduced proper fractions for n. Assuming that the order of growth (in time) for gcd is O(logn), what is the order of growth in terms of time and space for the function you wrote in Part (B) in terms of n. Explain your answer.
Suggested answer.
def count_fraction(n):
    if n == 1:
        return 0
    else:
        new = 0
        for i in range(1, n):
            if gcd(i, n) == 1:
                new += 1
        return new + count_fraction(n - 1)
The suggested answer is pretty strange, as this question in previous years was designed to test purely recursive/purely iterative solutions, but here it gave a mix. Nevertheless, I don't understand why the suggested order of growth is given as such. (I will write it in the format: suggested answer, my answer, and questions on my fundamentals.)
Time: O(n log n), since it's roughly log 1 + log 2 + ... + log(n-1) + log n.
My time: O(n^2 log n). Since there are n recursive function calls, each call has n-1 iterations, each of which takes O(log n) time due to gcd.
Question 1: Time, in my opinion, is the number of iterations/recursions times the time taken for one iteration/recursion. It's actually my first time interacting with a mixed iterative/recursive solution, so I don't really know how they interact. Can someone tell me whether I'm right or wrong?
Space: O(n), since gcd is O(1) and this code is obviously linear recursion.
My space: O(n*log n). Since gcd is O(log n) and this code takes up O(n) space.
Question 2: Space, in my opinion, is the number of recursions times the space taken for one recursive call, OR the largest amount of space required among all iterations. In the first place, I would think gcd is O(log n), as I assume that recursion will happen log n times. I want to ask whether the discrepancy is due to what my lecturer said.
(I don't really understand what my lecturer says about delayed operations for recursions on factorial, or about no new objects being formed in iteratives. How do you then accept the fact that there are NEW objects formed in recursion and also no delayed operations in iteration?)
If you can clarify my doubt on why gcd is O(1) instead of O(log n), I think if I take n*1 for the recursion case, I would agree with the answer.
I agree with your analysis of the running time. It should be O(n^2 log(n)), since you make up to n calls to gcd on each recursive call to count_fraction.
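Spelled out: count_fraction(n) makes n-1 calls to gcd, count_fraction(n-1) makes n-2, and so on down to 1, each call costing O(log n), so the total time is roughly ((n-1) + (n-2) + ... + 1) * O(log n) = O(n^2 log n).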
You're also partly right about the second question, but you get the conclusion wrong (and the supplied answer gets the right conclusion for the wrong reasons). The gcd function does indeed use O(log(n)) space, for the stack of the recursive calls. However, that space gets reused for each later call to gcd from count_fraction, so there's only ever one stack of size log(n). So there's no reason to multiply the log(n) by anything, only add it to whatever else might be using memory when the gcd calls are happening. Since there will also be a stack of size O(n) for the recursive calls of count_fraction, the smaller log(n) term can be dropped, so you say it takes O(n) space rather than O(n + log(n)).
All in all, I'd say this is a really bad assignment to be trying to learn from. Almost everything in it has an error somewhere, from the description saying it's limiting n when it's really limiting d, to the answers you describe which are all at least partly wrong.

Time Complexity for index and drop of first item in Data.Sequence

I was recently working on an implementation of calculating moving average from a stream of input, using Data.Sequence. I figured I could get the whole operation to be O(n) by using a deque.
My first attempt was (in my opinion) a bit more straightforward to read, but not a true deque. It looked like:
let newsequence = (|>) sequence n
...
let dropFrontTotal = fromIntegral (newtotal - index newsequence 0)
let newsequence' = drop 1 newsequence
...
According to the hackage docs for Data.Sequence, index should take O(log(min(i,n-i))) while drop should also take O(log(min(i,n-i))).
Here's my question:
If I do drop 1 someSequence, doesn't this mean a time complexity of O(log(min(1, (length someSequence)))), which in this case means: O(log(1))?
If so, isn't O(log(1)) effectively constant?
I had the same question for index someSequence 0: shouldn't that operation end up being O(log(0))?
Ultimately, I had enough doubts about my understanding that I resorted to using Criterion to benchmark the two implementations to prove that the index/drop version is slower (and the amount it's slower by grows with the input). The informal results on my machine can be seen at the linked gist.
I still don't really understand how to calculate time complexity for these operations, though, and I would appreciate any clarification anyone can provide.
What you suggest looks correct to me.
As a minor caveat, remember that these are amortized complexity bounds, so a single operation could require more than constant time, but a long chain of operations will only require a constant times the length of the chain.
If you use criterion to benchmark and "reset" the state at every computation, you might see non-constant time costs, because the "reset" is preventing the amortization. It really depends on how you perform the test. If you start from a sequence and perform a long chain of operations on it, it should be OK. If you repeat a single operation many times using the same operands, then it might not be.
Further, I guess bounds such as O(log(...)) should actually be read as O(log(1 + ...)) -- you can't realistically have O(log(1)) = O(0) or, worse, O(log(0)) = O(-inf) as a complexity bound.
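For reference, here is a hedged sketch of the windowed-average step in the deque style being discussed; it is my own reconstruction rather than the poster's code, and the window size k plus the running total are assumptions:
import qualified Data.Sequence as Seq
import Data.Sequence (Seq, ViewL(..), viewl, (|>))

-- One step of a moving average over a window of at most k samples: push the new
-- sample on the right and, once the window is full, pop the oldest off the left.
-- Both ends of a Seq are O(1) amortized, so each step is O(1) amortized.
step :: Int -> (Double, Seq Double) -> Double -> ((Double, Seq Double), Double)
step k (total, window) x
  | Seq.length window < k =
      let w = window |> x
          t = total + x
      in ((t, w), t / fromIntegral (Seq.length w))
  | otherwise =
      case viewl (window |> x) of
        oldest :< rest -> let t = total + x - oldest
                          in ((t, rest), t / fromIntegral k)
        EmptyL         -> ((total, window), 0)   -- unreachable for k >= 1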

Most efficient way to print an AVL tree of strings?

I'm thinking that an in order traversal will run in O(n) time. The only thing better than that would be to have something running in logn time. But I don't see how this could be, considering we have to run at least n times.
Is O(n) the fastest we could do here?
Converting and expanding #C.B.'s comment to an answer:
If you have an AVL tree with n strings in it and you want to print all of them, then you have to do at least Θ(n) total work simply because you have to print out each of the n strings. You can often lower-bound the amount of work required to produce a list or otherwise output a sequence of values simply by counting up how many items are going to be in the list.
We can be even more precise here. Suppose the combined length of all the strings in the tree is L. The time required to print out all the strings in the tree has to be at least Θ(L), since it costs some computational effort to output each individual character. Therefore, we can say that we have to do at least Θ(n + L) work to print out all the strings in the tree.
The bound given here just says that any correct algorithm has to do at least this much work, not that there actually is an algorithm that does this much work. But if you look closely at any of the major tree traversals - inorder, preorder, postorder, level-order - you'll find that they all match this time bound.
Now, one area where you can look for savings is in space complexity. A level-order traversal of the tree might require Ω(n) total space if the tree is perfectly balanced (since it holds a whole layer of the tree in memory and the bottommost layer can have Θ(n) nodes in it), while an inorder, preorder, or postorder traversal would only require O(log n) memory because you only need to store the current access path, which has logarithmic height in an AVL tree.
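To make the Θ(n + L) bound concrete, an in-order traversal in Haskell might look like the following generic sketch (a plain binary search tree stands in for the AVL tree):
data Tree = Leaf | Node Tree String Tree

-- Visits every node once and prints every character once: Theta(n + L) time,
-- with the recursion stack bounded by the tree height, i.e. O(log n) for an AVL tree.
printInOrder :: Tree -> IO ()
printInOrder Leaf = return ()
printInOrder (Node left s right) = do
  printInOrder left
  putStrLn s
  printInOrder right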

What's Up with O(1)?

I have been noticing some very strange usage of O(1) in discussion of algorithms involving hashing and types of search, often in the context of using a dictionary type provided by the language system, or using dictionary or hash-array types accessed using array-index notation.
Basically, O(1) means bounded by a constant time and (typically) fixed space. Some pretty fundamental operations are O(1), although using intermediate languages and special VMs tends to distort one's thinking here (e.g., how does one amortize the garbage collector and other dynamic processes over what would otherwise be O(1) activities?).
But ignoring amortization of latencies, garbage-collection, and so on, I still don't understand how the leap to assumption that certain techniques that involve some kind of searching can be O(1) except under very special conditions.
Although I have noticed this before, an example just showed up in the Pandincus question, "'Proper’ collection to use to obtain items in O(1) time in C# .NET?".
As I remarked there, the only collection I know of that provides O(1) access as a guaranteed bound is a fixed-bound array with an integer index value. The presumption is that the array is implemented by some mapping to random access memory that uses O(1) operations to locate the cell having that index.
For collections that involve some sort of searching to determine the location of a matching cell for a different kind of index (or for a sparse array with an integer index), life is not so easy. In particular, if there are collisions and congestion is possible, access is not exactly O(1). And if the collection is flexible, one must recognize and amortize the cost of expanding the underlying structure (such as a tree or a hash table) to relieve congestion (e.g., a high collision incidence or tree imbalance).
I would never have thought to speak of these flexible and dynamic structures as O(1). Yet I see them offered up as O(1) solutions without any identification of the conditions that must be maintained to actually have O(1) access be assured (as well as have that constant be negligibly small).
THE QUESTION: All of this preparation is really for a question. What is the casualness around O(1) and why is it accepted so blindly? Is it recognized that even O(1) can be undesirably large, even though near-constant? Or is O(1) simply the appropriation of a computational-complexity notion to informal use? I'm puzzled.
UPDATE: The Answers and comments point out where I was casual about defining O(1) myself, and I have repaired that. I am still looking for good answers, and some of the comment threads are rather more interesting than their answers, in a few cases.
The problem is that people are really sloppy with terminology. There are 3 important but distinct classes here:
O(1) worst-case
This is simple - all operations take no more than a constant amount of time in the worst case, and therefore in all cases. Accessing an element of an array is O(1) worst-case.
O(1) amortized worst-case
Amortized means that not every operation is O(1) in the worst case, but for any sequence of N operations, the total cost of the sequence is O(N) in the worst case. This means that even though we can't bound the cost of any single operation by a constant, there will always be enough "quick" operations to make up for the "slow" operations such that the running time of the sequence of operations is linear in the number of operations.
For example, the standard Dynamic Array which doubles its capacity when it fills up requires O(1) amortized time to insert an element at the end, even though some insertions require O(N) time - there are always enough O(1) insertions that inserting N items always takes O(N) time total.
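To see where the constant comes from: inserting N items costs N for the writes themselves, plus the copying done at each doubling, which (starting from capacity 1, with N a power of two for simplicity) is 1 + 2 + 4 + ... + N/2 < N, so the whole sequence costs less than 2N operations, i.e. O(1) amortized per insertion.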
O(1) average-case
This one is the trickiest. There are two possible definitions of average-case: one for randomized algorithms with fixed inputs, and one for deterministic algorithms with randomized inputs.
For randomized algorithms with fixed inputs, we can calculate the average-case running time for any given input by analyzing the algorithm and determining the probability distribution of all possible running times and taking the average over that distribution (depending on the algorithm, this may or may not be possible due to the Halting Problem).
In the other case, we need a probability distribution over the inputs. For example, if we were to measure a sorting algorithm, one such probability distribution would be the distribution that has all N! possible permutations of the input equally likely. Then, the average-case running time is the average running time over all possible inputs, weighted by the probability of each input.
Since the subject of this question is hash tables, which are deterministic, I'm going to focus on the second definition of average-case. Now, we can't always determine the probability distribution of the inputs because, well, we could be hashing just about anything, and those items could be coming from a user typing them in or from a file system. Therefore, when talking about hash tables, most people just assume that the inputs are well-behaved and the hash function is well behaved such that the hash value of any input is essentially randomly distributed uniformly over the range of possible hash values.
Take a moment and let that last point sink in - the O(1) average-case performance for hash tables comes from assuming all hash values are uniformly distributed. If this assumption is violated (which it usually isn't, but it certainly can and does happen), the running time is no longer O(1) on average.
See also Denial of Service by Algorithmic Complexity. In this paper, the authors discuss how they exploited some weaknesses in the default hash functions used by two versions of Perl to generate large numbers of strings with hash collisions. Armed with this list of strings, they generated a denial-of-service attack on some webservers by feeding them these strings that resulted in the worst-case O(N) behavior in the hash tables used by the webservers.
My understanding is that O(1) is not necessarily constant; rather, it is not dependent on the variables under consideration. Thus a hash lookup can be said to be O(1) with respect to the number of elements in the hash, but not with respect to the length of the data being hashed or ratio of elements to buckets in the hash.
The other element of confusion is that big O notation describes limiting behavior. Thus, a function f(N) for small values of N may indeed show great variation, but you would still be correct to say it is O(1) if the limit as N approaches infinity is constant with respect to N.
O(1) means constant time and (typically) fixed space
Just to clarify these are two separate statements. You can have O(1) in time but O(n) in space or whatever.
Is it recognized that even O(1) can be undesirably large, even though near-constant?
O(1) can be impractically HUGE and it's still O(1). It is often neglected that if you know you'll have a very small data set, the constant is more important than the complexity, and for reasonably small data sets, it's a balance of the two. An O(n!) algorithm can out-perform an O(1) one if the constants and sizes of the data sets are of the appropriate scale.
O() notation is a measure of the complexity - not the time an algorithm will take, or a pure measure of how "good" a given algorithm is for a given purpose.
I can see what you're saying, but I think there are a couple of basic assumptions underlying the claim that look-ups in a Hash table have a complexity of O(1).
The hash function is reasonably designed to avoid a large number of collisions.
The set of keys is pretty much randomly distributed, or at least not purposely designed to make the hash function perform poorly.
The worst case complexity of a Hash table look-up is O(n), but that's extremely unlikely given the above 2 assumptions.
A hashtable is a data structure that supports O(1) search and insertion.
A hashtable usually holds key and value pairs, where the key is used as the parameter to a function (a hash function) which will determine the location of the value in its internal data structure, usually an array.
As insertion and search only depends upon the result of the hash function and not on the size of the hashtable nor the number of elements stored, a hashtable has O(1) insertion and search.
There is one caveat, however. As the hashtable becomes more and more full, there will be hash collisions, where the hash function returns an element of the array which is already occupied. This will necessitate collision resolution in order to find another empty element.
When a hash collision occurs, a search or insertion cannot be performed in O(1) time. However, good collision resolution algorithms can reduce the number of tries needed to find another suitable empty spot, and increasing the hashtable size can reduce the number of collisions in the first place.
So, in theory, only a hashtable backed by an array with an infinite number of elements and a perfect hash function would be able to achieve O(1) performance, as that is the only way to avoid the hash collisions that drive up the number of required operations. Therefore, any finite-sized array will at one time or another perform worse than O(1) due to hash collisions.
Let's take a look at an example. Let's use a hashtable to store the following (key, value) pairs:
(Name, Bob)
(Occupation, Student)
(Location, Earth)
We will implement the hashtable back-end with an array of 100 elements.
The key will be used to determine an element of the array to store the (key, value) pair. In order to determine the element, the hash_function will be used:
hash_function("Name") returns 18
hash_function("Occupation") returns 32
hash_function("Location") returns 74.
From the above result, we'll assign the (key, value) pairs into the elements of the array.
array[18] = ("Name", "Bob")
array[32] = ("Occupation", "Student")
array[74] = ("Location", "Earth")
The insertion only requires the use of a hash function, and does not depend on the size of the hashtable nor its elements, so it can be performed in O(1) time.
Similarly, searching for an element uses the hash function.
If we want to look up the key "Name", we'll perform a hash_function("Name") to find out which element in the array the desired value resides.
Also, searching does not depend on the size of the hashtable nor the number of elements stored, therefore an O(1) operation.
All is well. Let's try to add an additional entry of ("Pet", "Dog"). However, there is a problem, as hash_function("Pet") returns 18, which is the same hash for the "Name" key.
Therefore, we'll need to resolve this hash collision. Let's suppose that the hash collision resolving function we used found that the new empty element is 29:
array[29] = ("Pet", "Dog")
Since there was a hash collision in this insertion, our performance was not quite O(1).
This problem will also crop up when we try to search for the "Pet" key, as trying to find the element containing the "Pet" key by performing hash_function("Pet") will always return 18 initially.
Once we look up element 18, we'll find the key "Name" rather than "Pet". When we find this inconsistency, we'll need to resolve the collision in order to retrieve the correct element which contains the actual "Pet" key. Resolving a hash collision is an additional operation which makes the hashtable not perform at O(1) time.
I can't speak to the other discussions you've seen, but there is at least one hashing algorithm that is guaranteed to be O(1).
Cuckoo hashing maintains an invariant so that there is no chaining in the hash table. Insertion is amortized O(1), retrieval is always O(1). I've never seen an implementation of it, it's something that was newly discovered when I was in college. For relatively static data sets, it should be a very good O(1), since it calculates two hash functions, performs two lookups, and immediately knows the answer.
Mind you, this is assuming the hash calculation is O(1) as well. You could argue that for length-K strings, any hash is minimally O(K). In reality, you can bound K pretty easily, say K < 1000. O(K) ~= O(1) for K < 1000.
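A hedged sketch of why the lookup side is worst-case O(1): every key can live in only one of two slots, one per hash function, so a lookup probes at most two cells. This is my own illustration, and it assumes h1 and h2 already map keys into the table's index range:
import Data.Array (Array, (!))

cuckooLookup :: Eq k
             => (k -> Int) -> (k -> Int)     -- the two hash functions
             -> Array Int (Maybe (k, v))     -- each slot holds at most one pair
             -> k -> Maybe v
cuckooLookup h1 h2 table key =
  case table ! h1 key of
    Just (k, v) | k == key -> Just v
    _ -> case table ! h2 key of
           Just (k, v) | k == key -> Just v
           _ -> Nothing            -- not in either slot, hence not in the table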
There may be a conceptual error as to how you're understanding Big-Oh notation. What it means is that, given an algorithm and an input data set, the upper bound for the algorithm's run time depends on the value of the O-function when the size of the data set tends to infinity.
When one says that an algorithm takes O(n) time, it means that the runtime for an algorithm's worst case depends linearly on the size of the input set.
When an algorithm takes O(1) time, the only thing it means is that, given a function T(f) which calculates the runtime of a function f(n), there exists a natural positive number k such that T(f) < k for any input n. Essentially, it means that the upper bound for the run time of an algorithm is not dependent on its size, and has a fixed, finite limit.
Now, that does not mean in any way that the limit is small, just that it's independent of the size of the input set. So if I artificially define a bound k for the size of a data set, then its complexity will be O(k) == O(1).
For example, searching for an instance of a value on a linked list is an O(n) operation. But if I say that a list has at most 8 elements, then O(n) becomes O(8) becomes O(1).
In this case, if we used a trie data structure as a dictionary (a tree of characters, where the leaf node contains the value for the string used as key), and the key is bounded, then its lookup time can be considered O(1) (if I define a character field as having at most k characters in length, which can be a reasonable assumption for many cases).
For a hash table, as long as you assume that the hashing function is good (randomly distributed) and sufficiently sparse so as to minimize collisions, and rehashing is performed when the data structure is sufficiently dense, you can indeed consider it an O(1) access-time structure.
In conclusion, O(1) time may be overrated for a lot of things. For large data structures the complexity of an adequate hash function may not be trivial, and sufficient corner cases exist where the amount of collisions lead it to behave like an O(n) data structure, and rehashing may become prohibitively expensive. In which case, an O(log(n)) structure like an AVL or a B-tree may be a superior alternative.
In general, I think people use them comparatively without regard to exactness. For example, hash-based data structures are O(1) (average) look up if designed well and you have a good hash. If everything hashes to a single bucket, then it's O(n). Generally, though one uses a good algorithm and the keys are reasonably distributed so it's convenient to talk about it as O(1) without all the qualifications. Likewise with lists, trees, etc. We have in mind certain implementations and it's simply more convenient to talk about them, when discussing generalities, without the qualifications. If, on the other hand, we're discussing specific implementations, then it probably pays to be more precise.
HashTable looks-ups are O(1) with respect to the number of items in the table, because no matter how many items you add to the list the cost of hashing a single item is pretty much the same, and creating the hash will tell you the address of the item.
To answer why this is relevant: the OP asked about why O(1) seemed to be thrown around so casually when in his mind it obviously could not apply in many circumstances. This answer explains that O(1) time really is possible in those circumstances.
Hash table implementations are in practice not "exactly" O(1) in use; if you test one, you'll find they average around 1.5 lookups to find a given key across a large dataset
(due to the fact that collisions DO occur, and upon colliding, a different location must be assigned).
Also, in practice, HashMaps are backed by arrays with an initial size that is "grown" to double the size when it reaches 70% fullness on average, which gives a relatively good addressing space. After 70% fullness, collision rates grow faster.
Big O theory states that if you have a O(1) algorithm, or even an O(2) algorithm, the critical factor is the degree of the relation between input-set size and steps to insert/fetch one of them. O(2) is still constant time, so we just approximate it as O(1), because it means more or less the same thing.
In reality, there is only 1 way to have a "perfect hashtable" with O(1), and that requires:
A Global Perfect Hash Key Generator
An Unbounded addressing space.
(Exception case: if you can compute in advance all the permutations of permitted keys for the system, and your target backing store address space is defined to be the size where it can hold all keys that are permitted, then you can have a perfect hash, but it's a "domain limited" perfection.)
Given a fixed memory allocation, it is not plausible in the least to have this, because it would assume that you have some magical way to pack an infinite amount of data into a fixed amount of space with no loss of data, and that's logistically impossible.
So retrospectively, getting O(1.5) which is still constant time, in a finite amount of memory with even a relatively Naïve hash key generator, I consider pretty damn awesome.
Suffixed note: I use O(1.5) and O(2) here. These don't actually exist in big-O. They are merely what people who don't know big-O assume is the rationale.
If something takes 1.5 steps to find a key, or 2 steps to find that key, or 1 step to find that key, but the number of steps never exceeds 2 and whether it takes 1 step or 2 is completely random, then it is still Big-O of O(1). This is because no matter how many items you add to the dataset, it still maintains the "at most 2 steps". If for all tables with more than 500 keys it takes 2 steps, then you can assume those 2 steps are in fact one step with 2 parts, ... which is still O(1).
If you can't make this assumption, then you're not thinking in Big-O terms at all, because then you must use a number which represents the number of finite computational steps required to do everything, and "one step" is meaningless to you. Just get it into your head that there is NO direct correlation between Big-O and the number of execution cycles involved.
O(1) means, exactly, that the algorithm's time complexity is bounded by a fixed value. This doesn't mean it's constant, only that it is bounded regardless of input values. Strictly speaking, many allegedly O(1) time algorithms are not actually O(1) and just go so slowly that they are bounded for all practical input values.
Yes, garbage collection does affect the asymptotic complexity of algorithms running in the garbage collected arena. It is not without cost, but it is very hard to analyze without empirical methods, because the interaction costs are not compositional.
The time spent garbage collecting depends on the algorithm being used. Typically, modern garbage collectors toggle modes as memory fills up to keep these costs under control. For instance, a common approach is to use a Cheney-style copying collector when memory pressure is low, because it pays a cost proportional to the size of the live set in exchange for using more space, and to switch to a mark-and-sweep collector when memory pressure becomes greater, because it pays a cost proportional to the live set for marking and to the whole heap or dead set for sweeping, but uses less extra space. By the time you add card-marking and other optimizations, etc., the worst-case costs for a practical garbage collector may actually be a fair bit worse, picking up an extra logarithmic factor for some usage patterns.
So, if you allocate a big hash table, even if you access it using O(1) searches for all time during its lifetime, if you do so in a garbage collected environment, occasionally the garbage collector will traverse the entire array, because it is size O(n) and you will pay that cost periodically during collection.
The reason we usually leave it off of the complexity analysis of algorithms is that garbage collection interacts with your algorithm in non-trivial ways. How bad of a cost it is depends a lot on what else you are doing in the same process, so the analysis is not compositional.
Moreover, above and beyond the copy vs. compact vs. mark and sweep issue, the implementation details can drastically affect the resulting complexities:
Incremental garbage collectors that track dirty bits, etc. can all but make those larger re-traversals disappear.
It depends on whether your GC works periodically based on wall-clock time or runs proportional to the number of allocations.
Whether a mark and sweep style algorithm is concurrent or stop-the-world
Whether it marks fresh allocations black or leaves them white until it drops them into a black container.
Whether your language admits modification of pointers, which can let some garbage collectors work in a single pass.
Finally, when discussing an algorithm, we are discussing a straw man. The asymptotics will never fully incorporate all of the variables of your environment. Rarely do you ever implement every detail of a data structure as designed. You borrow a feature here and there, you drop a hash table in because you need fast unordered key access, you use a union-find over disjoint sets with path compression and union by rank to merge memory-regions over there because you can't afford to pay a cost proportional to the size of the regions when you merge them or what have you. These structures are thought primitives and the asymptotics help you when planning overall performance characteristics for the structure 'in-the-large' but knowledge of what the constants are matters too.
You can implement that hash table with perfectly O(1) asymptotic characteristics, just don't use garbage collection; map it into memory from a file and manage it yourself. You probably won't like the constants involved though.
I think when many people throw around the term "O(1)" they implicitly have in mind a "small" constant, whatever "small" means in their context.
You have to take all this big-O analysis with context and common sense. It can be an extremely useful tool or it can be ridiculous, depending on how you use it.
