Which of the following mechanisms does it use?
https://en.wikipedia.org/wiki/Hash_table#Collision_resolution
Open addressing with quadratic probing (reference: source code).
Note 1: Not everything that acts like an associative array is actually implemented as a hash table under the hood. In particular, small/dense arrays like [3, 1, 4, 1.5] are backed by an actual array (similar to a C array) for fast index-based access.
Note 2: The answer to this question may or may not change over time if/when the team experiments with alternative implementations. For example, open addressing requires relatively low load factors in order to provide quick accesses; it would be interesting to find an implementation that's more memory efficient (without being slower).
Related
I was surprised to discover recently that while dicts are guaranteed to preserve insertion order in Python 3.7+, sets are not:
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> d
{'a': 1, 'b': 2, 'c': 3}
>>> d['d'] = 4
>>> d
{'a': 1, 'b': 2, 'c': 3, 'd': 4}
>>> s = {'a', 'b', 'c'}
>>> s
{'b', 'a', 'c'}
>>> s.add('d')
>>> s
{'d', 'b', 'a', 'c'}
What is the rationale for this difference? Do the same efficiency improvements that led the Python team to change the dict implementation not apply to sets as well?
I'm not looking for pointers to ordered-set implementations or ways to use dicts as stand-ins for sets. I'm just wondering why the Python team didn't make built-in sets preserve order at the same time they did so for dicts.
Sets and dicts are optimized for different use cases. The primary use of a set is fast membership testing, which is order agnostic. For dicts, the lookup is the most critical operation, and the key is more likely to be present. With sets, the presence or absence of an element is not known in advance, so the set implementation needs to optimize for both the found and the not-found case. Also, some optimizations for common set operations such as union and intersection make it difficult to retain set ordering without degrading performance.
While both data structures are hash based, it's a common misconception that sets are just implemented as dicts with null values. Even before the compact dict implementation in CPython 3.6, the set and dict implementations already differed significantly, with little code reuse. For example, both use open addressing, but dicts probe in a pseudo-random (perturbed) order, whereas sets combine linear probing with that open-addressing scheme to improve cache locality. The initial linear probe (LINEAR_PROBES = 9 by default in CPython) checks a run of adjacent key/hash pairs, reducing the cost of hash collision handling: consecutive memory accesses are cheaper than scattered probes.
dictobject.c - master, v3.5.9
setobject.c - master, v3.5.9
issue18771 - changeset to reduce the cost of hash collisions for set objects in Python 3.4.
It would be possible in theory to change CPython's set implementation to be similar to the compact dict, but in practice there are drawbacks, and notable core developers were opposed to making such a change.
Sets remain unordered. (Why? The usage patterns are different. Also, different implementation.)
– Guido van Rossum
Sets use a different algorithm that isn't as amenable to retaining insertion order.
Set-to-set operations lose their flexibility and optimizations if order is required. Set mathematics are defined in terms of unordered sets. In short, set ordering isn't in the immediate future.
– Raymond Hettinger
A detailed discussion about whether to compactify sets for 3.7, and why it was decided against, can be found in the python-dev mailing lists.
In summary, the main points are: different usage patterns (insertion-ordered dicts are useful, e.g. for **kwargs, much less so for sets); the space savings from compacting sets are less significant (there are only key + hash arrays to densify, as opposed to key + hash + value arrays); and the aforementioned linear probing optimization that sets currently use is incompatible with a compact implementation.
I will reproduce Raymond's post below which covers the most important points.
On Sep 14, 2016, at 3:50 PM, Eric Snow wrote:
Then, I'll do the same to sets.
Unless I've misunderstood, Raymond was opposed to making a similar change to set.
That's right. Here are a few thoughts on the subject before people start running wild.
For the compact dict, the space savings were a net win, with the additional space consumed by the indices and the overallocation for the key/value/hash arrays being more than offset by the improved density of the key/value/hash arrays. However, for sets, the net was much less favorable because we still need the indices and overallocation but can offset the space cost by densifying only two of the three arrays. In other words, compacting makes more sense when you have wasted space for keys, values, and hashes. If you lose one of those three, it stops being compelling.
The use pattern for sets is different from dicts. The former has more hit or miss lookups. The latter tends to have fewer missing key lookups. Also, some of the optimizations for the set-to-set operations make it difficult to retain set ordering without impacting performance.
I pursued an alternative path to improve set performance. Instead of compacting (which wasn't much of a space win and incurred the cost of an additional indirection), I added linear probing to reduce the cost of collisions and improve cache performance. This improvement is incompatible with the compacting approach I advocated for dictionaries.
For now, the ordering side-effect on dictionaries is non-guaranteed, so it is premature to start insisting that sets become ordered as well. The docs already link to a recipe for creating an OrderedSet ( https://code.activestate.com/recipes/576694/ ) but it seems like the uptake has been nearly zero. Also, now that Eric Snow has given us a fast OrderedDict, it is easier than ever to build an OrderedSet from MutableSet and OrderedDict, but again I haven't observed any real interest because typical set-to-set data analytics don't really need or care about ordering. Likewise, the primary use of fast membership testing is order agnostic.
That said, I do think there is room to add alternative set implementations to PyPI. In particular, there are some interesting special cases for orderable data where set-to-set operations can be sped up by comparing entire ranges of keys (see https://code.activestate.com/recipes/230113-implementation-of-sets-using-sorted-lists for a starting point). IIRC, PyPI already has code for set-like bloom filters and cuckoo hashing.
I understand that it is exciting to have a major block of code accepted into the Python core, but that shouldn't open the floodgates to engaging in more major rewrites of other datatypes unless we're sure that it is warranted.
– Raymond Hettinger
From [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered, Sept 2016.
Discussions
Your question is germane and has already been heavily discussed on python-dev. R. Hettinger shared a list of rationales in that thread. The issue appeared to be left open-ended shortly after this detailed reply from T. Peters. Some time later (c. 2022), the discussion was reignited on python-ideas.
In short, the implementation of modern dicts that preserves insertion order is unique and is not considered appropriate for sets. In particular, dicts are used everywhere to run Python (e.g. __dict__ in the namespaces of objects). A major motivation behind the modern dict was to reduce its size, making Python more memory-efficient overall. Sets, in contrast, are less prevalent in Python's core, which makes such a refactoring harder to justify. See also R. Hettinger's talk on the modern dict implementation.
Perspectives
The unordered nature of sets in Python parallels the behavior of mathematical sets. Order is not guaranteed.
The corresponding mathematical concept is unordered and it would be weird to impose such an order.
– R. Hettinger
If order of any kind were introduced to sets in Python, that behavior would correspond to a completely separate mathematical structure, namely an ordered set (or Oset). Osets play a separate role in mathematics, particularly in combinatorics. One practical application of Osets is found in change ringing, the ringing of bells in ordered sequences.
Having unordered sets is consistent with a very generic and ubiquitous data structure that underpins most of modern math, i.e. Set Theory. I submit that unordered sets in Python are good to have.
See also related posts that expand on this topic:
Converting a list to a set changes element order
Get unique values from a list in python
Does Python have an ordered set
Classic CAS examples deal with simple data structures where the critical change is to a single primitive variable. However, even a linked list, for example, requires changes to multiple data items: head, tail, next and prev. How do you deal with that?
I have in mind "capturing" the 4 pointers in a 4-word memory block and handing that block to a CAS routine. Is that technically possible, and is it good practice?
I am looking for a Haskell data structure that stores an ordered list of elements and that is time-efficient at swapping pairs of elements at arbitrary locations within the list. It's not [a], obviously. It's not Vector because swapping creates new vectors. Which data structure is efficient at this?
The most efficient implementations of persistent data structures, which exhibit effectively O(1) updates (as well as appending, prepending, counting and slicing), are based on the Array Mapped Trie algorithm. The Vector data structures of Clojure and Scala are based on it, for instance. The only Haskell implementation of that data structure that I know of is provided by the "persistent-vector" package.
This algorithm is quite young; it was first presented in the year 2000, which might be the reason why not so many people have heard about it. But it turned out to be such a universal solution that it was soon adapted for hash tables as well. The adapted version of the algorithm is called Hash Array Mapped Trie. It is likewise used in Clojure and Scala to implement their Set and Map data structures, and it is fairly widespread in Haskell, with packages like "unordered-containers" and "stm-containers" revolving around it.
To learn more about the algorithm I recommend the following links:
http://blog.higher-order.net/2009/02/01/understanding-clojures-persistentvector-implementation.html
http://lampwww.epfl.ch/papers/idealhashtrees.pdf
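As a small illustration of an HAMT-backed structure, here is a minimal sketch using Data.HashMap.Strict from the unordered-containers package mentioned above (the keys and values are made up for the example):

import qualified Data.HashMap.Strict as HM

-- A HAMT-backed map: inserts and lookups are effectively constant time
-- (O(log32 n)) thanks to the wide branching factor, and updates are
-- persistent, so the old map remains valid.
main :: IO ()
main = do
  let m  = HM.fromList [("a", 1 :: Int), ("b", 2), ("c", 3)]
      m' = HM.insert "d" 4 m   -- m is left untouched
  print (HM.lookup "d" m')     -- Just 4
  print (HM.lookup "d" m)      -- Nothing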
Data.Sequence from the containers package would likely be a not-terrible data structure to start with for this use case.
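For instance, a minimal sketch of swapping two elements of a Seq; both Seq.index and Seq.update are O(log n), so a swap stays logarithmic (swapAt is an illustrative name, not part of the library):

import qualified Data.Sequence as Seq
import Data.Sequence (Seq)

-- Swap the elements at positions i and j (assumed to be in bounds).
swapAt :: Int -> Int -> Seq a -> Seq a
swapAt i j s =
  let x = Seq.index s i
      y = Seq.index s j
  in Seq.update i y (Seq.update j x s)

main :: IO ()
main = print (swapAt 1 3 (Seq.fromList "abcde"))  -- fromList "adcbe"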
Haskell is a (nearly) pure functional language, so any data structure you update will need to make a new copy of the structure, and re-using the data elements is close to the best you can do. Also, the new list would be lazily evaluated and typically only the spine would need to be created until you need the data. If the number of updates is small compared to the number of elements, you could make a difference list that checks a sparse set of updates first, and only then looks in the original vector.
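A sketch of that last idea, assuming an immutable base Vector plus a small Data.Map of sparse overrides (the Patched type and its function names are made up for illustration):

import qualified Data.Map.Strict as Map
import qualified Data.Vector as V

-- An immutable base vector plus a sparse map of overriding updates.
data Patched a = Patched (V.Vector a) (Map.Map Int a)

-- Reads check the overrides first, then fall back to the base vector.
lookupP :: Patched a -> Int -> a
lookupP (Patched base overrides) i =
  case Map.lookup i overrides of
    Just x  -> x
    Nothing -> base V.! i

-- Writes only touch the (small) override map and never copy the base.
updateP :: Int -> a -> Patched a -> Patched a
updateP i x (Patched base overrides) = Patched base (Map.insert i x overrides)

main :: IO ()
main = print (lookupP (updateP 2 99 (Patched (V.fromList [0 .. 9 :: Int]) Map.empty)) 2)  -- 99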
I have to pick a type for a sequence of floats with 16K elements. The values will be updated frequently, potentially many times a second.
I've read the wiki page on arrays. Here are the conclusions I've drawn so far. (Please correct me if any of them are mistaken.)
IArrays would be unacceptably slow in this case, because they'd be copied on every change. With 16K floats in the array, that's 64KB of memory copied each time.
IOArrays could do the trick, as they can be modified without copying all the data. In my particular use case, doing all updates in the IO monad isn't a problem at all. But they're boxed, which means extra overhead, and that could add up with 16K elements.
IOUArrays seem like the perfect fit. Like IOArrays, they don't require a full copy on each change. But unlike IOArrays, they're unboxed, meaning they're basically the Haskell equivalent of a C array of floats. I realize they're strict. But I don't see that being an issue, because my application would never need to access anything less than the entire array.
Am I right to look to IOUArrays for this?
Also, suppose I later want to read or write the array from multiple threads. Will I have backed myself into a corner with IOUArrays? Or is the choice of IOUArrays totally orthogonal to the problem of concurrency? (I'm not yet familiar with the concurrency primitives in Haskell and how they interact with the IO monad.)
A good rule of thumb is that you should almost always use the vector library instead of arrays. In this case, you can use mutable vectors from the Data.Vector.Mutable module.
The key operations you'll want are read and write which let you mutably read from and write to the mutable vector.
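For example, here is a minimal sketch using the unboxed mutable variant (Data.Vector.Unboxed.Mutable), which fits the 16K-Float use case; the size and indices are just placeholders:

import qualified Data.Vector.Unboxed.Mutable as M

main :: IO ()
main = do
  -- Allocate a 16K-element unboxed vector of Floats, initialised to 0.
  v <- M.replicate 16384 (0 :: Float)
  -- In-place write and read; nothing is copied.
  M.write v 42 3.14
  x <- M.read v 42
  print x  -- 3.14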
You'll want to benchmark, of course (with criterion), or you might be interested in browsing some benchmarks I did, e.g. here (if that link works for you; it's broken for me).
The vector library is a nice interface (crazy understatement) over GHC's more primitive array types which you can get to more directly in the primitive package. As are the things in the standard array package; for instance an IOUArray is essentially a MutableByteArray#.
Unboxed mutable arrays are usually going to be the fastest, but you should compare them in your application to IOArray or the vector equivalent.
My advice would be:
if you probably don't need concurrency first try a mutable unboxed Vector as Gabriel suggests
if you know you will want concurrent updates (and feel a little brave) then first try a MutableArray and then do atomic updates with these functions from the atomic-primops library. If you want fine-grained locking, this is your best choice. Of course concurrent reads will work fine on whatever array you choose.
It should also be theoretically possible to do concurrent updates on a MutableByteArray (equivalent to IOUArray) with those atomic-primops functions too, since a Float should always fit into a word (I think), but you'd have to do some research (or bug Ryan).
Also be aware of potential memory reordering issues when doing concurrency with the atomic-primops stuff, and help convince yourself with lots of tests; this is somewhat uncharted territory.
I'm looking for a functional data structure that represents finite bijections between two types, that is space-efficient and time-efficient.
For instance, I'd be happy if, considering a bijection f of size n:
extending f with a new pair of elements has complexity O(ln n)
querying f(x) or f^-1(x) has complexity O(ln n)
the internal representation for f is more space efficient than having 2 finite maps (representing f and its inverse)
I am aware of efficient representation of permutations, like this paper, but it does not seem to solve my problem.
Please have a look at my answer to a relatively similar question. The provided code can handle general NxM relations, but it can also be specialized to just bijections (just as you would for a binary search tree).
Pasting the answer here for completeness:
The simplest way is to use a pair of unidirectional maps. It has some cost, but you won't get much better (you could get a bit better using dedicated binary trees, but there is a huge complexity cost to pay if you have to implement it yourself). In essence, lookups will be just as fast, but additions and deletions will be twice as slow, which isn't so bad for a logarithmic operation. Another advantage of this technique is that you can use specialized map types for the key or the value type if you have one available; you won't get that much flexibility from a single generalist data structure.
A different solution is to use a quadtree: instead of considering an NxN relation as a pair of 1xN and Nx1 relations, you see it as a set of elements in the cartesian product (Key*Value) of your types, that is, a spatial plane. But it's not clear to me that the time and memory costs are better than with two maps; I suppose it would need to be tested.
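A minimal sketch of the pair-of-maps approach, using Data.Map from the containers package (the Bij type and function names are made up for illustration; a real version would also have to evict any existing pairing for x or y to keep the relation a bijection):

import qualified Data.Map.Strict as Map

-- A bijection stored as two maps, one per direction.
data Bij a b = Bij (Map.Map a b) (Map.Map b a)

empty :: Bij a b
empty = Bij Map.empty Map.empty

-- O(log n): insert into both directions (assumes neither x nor y is present yet).
insert :: (Ord a, Ord b) => a -> b -> Bij a b -> Bij a b
insert x y (Bij fwd bwd) = Bij (Map.insert x y fwd) (Map.insert y x bwd)

-- O(log n) lookups in either direction.
lookupFwd :: Ord a => a -> Bij a b -> Maybe b
lookupFwd x (Bij fwd _) = Map.lookup x fwd

lookupBwd :: Ord b => b -> Bij a b -> Maybe a
lookupBwd y (Bij _ bwd) = Map.lookup y bwd

main :: IO ()
main = do
  let b = insert "one" 1 (insert "two" (2 :: Int) empty)
  print (lookupFwd "one" b)  -- Just 1
  print (lookupBwd 2 b)      -- Just "two"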
Although it doesn't satisfy your third requirement, bimaps seem like the way to go. (They just make two finite maps, one in each direction, convenient to use.)