I've searched a bit and found a related post: Get median from AVL tree?
but I'm not too satisfied with the response.
My thoughts on solving this problem:
If the balance factor is 0, return root
else keep removing the root until the tree is completely balanced, and calculate the median of the roots you just removed
Assuming the AVL tree will keep the balance (by definition?).
I've seen some answers suggesting an in-order traversal to find the median, but I think that will require more space and time in my opinion.
Can someone confirm or correct my ideas? Thanks!
There are two problems in your suggested approach:
You destroy your tree in the process (or take up twice as much memory for a "backup" copy)
In the worst case, you need quite a lot of root removals to get a completely balanced tree (I think in the worst case it would be close to 2^(n-1)-1 removals)... and you'd still need to calculate the median from the roots you removed.
The answer in your linked question is right and optimal. The usual way to solve this is to construct an order statistic tree (by storing the number of elements of the left and right subtrees in each node). Do note that you have to adjust those counts accordingly when a rotation of the AVL tree happens.
See IVlad's answer here. Since an AVL tree guarantees an O(log n) Search operation and IVlad's algorithm is essentially a Search operation, you can find the k-th smallest element in O(log n) time and O(1) space (not counting the space for the tree itself).
Assuming your tree is indexed from 0 and has n elements, find the median in the following way:
if n is odd: Find the (n-1)/2-th element and return it
if n is even: Find the n/2-th and (n/2)-1 elements and return their average
Also, if changing the tree (left/right element counts) is not an option, see the second part of the answer you linked to.
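For concreteness, here is a minimal C++ sketch of that selection walk, assuming a hypothetical augmented node that stores the size of its left subtree (the field names, and the code that keeps the counts correct on insert, delete and rotation, are up to your implementation):

#include <cstddef>

// Hypothetical augmented AVL node: besides the usual fields, each node stores
// the number of nodes in its left subtree, kept up to date as noted above.
struct Node {
    int value;
    std::size_t leftCount;   // size of the left subtree
    Node* left;
    Node* right;
};

// IVlad-style selection: k-th smallest element (0-indexed) in O(log n) time
// and O(1) extra space, by walking down the tree once.
int kthSmallest(const Node* node, std::size_t k) {
    while (node) {
        if (k < node->leftCount) {
            node = node->left;            // the k-th element is on the left
        } else if (k == node->leftCount) {
            return node->value;           // this node is the k-th element
        } else {
            k -= node->leftCount + 1;     // skip the left subtree and this node
            node = node->right;
        }
    }
    return -1;  // unreachable if k < number of elements in the tree
}

// Median of a tree with n elements, indexed from 0 as described above.
double median(const Node* root, std::size_t n) {
    if (n % 2 == 1)
        return kthSmallest(root, (n - 1) / 2);
    return (kthSmallest(root, n / 2 - 1) + kthSmallest(root, n / 2)) / 2.0;
}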
I am currently storing a large number of unsigned 32-bit integers in a bit trie (effectively forming a binary tree with a node for each bit in the 32-bit value.) This is very efficient for fast lookup of exact values.
I now want to be able to search for keys that may or may not be in the trie and find the value for the first key less than or equal to the search key. Is this efficiently possible with a bit trie, or should I use a different data structure?
I am using a trie due to its speed and cache locality, and ideally want to sacrifice neither.
For example, suppose the trie has two keys added:
0x00AABBCC
0x00AABB00
and I am now searching for a key that is not present, 0x00AABB11. I would like to find the first key present in the tree with a value <= the search key, which in this case would be the node for 0x00AABB00.
While I've thought of a possible algorithm for this, I am seeking concrete information on if it is efficiently possible and/or if there are known algorithms for this, which will no doubt be better than my own.
We can think of a bit trie as a binary search tree. In fact, it is a binary search tree. Take the 32-bit trie as an example and treat the left child as 0 and the right child as 1. At the root, the left subtree is for the numbers less than 0x80000000 and the right subtree is for the numbers no less than 0x80000000, and so on down the levels. So you can use essentially the same method you would use to find the largest item not larger than the search key in a binary search tree. Don't worry about the backtracking; it won't backtrack much and it won't change the search complexity.
When the match fails in the bit trie, just backtrack to the nearest ancestor of the failed node that still has a smaller branch you could have taken, and take the right-most descendant of that branch.
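To make that concrete, here is a rough, untested C++ sketch of the floor search, assuming a hypothetical one-node-per-bit layout with two child pointers per node and a leaf at depth 32 for every stored key:

#include <cstdint>

// Hypothetical layout: two child pointers per node, a leaf at depth 32 for
// every stored key, and internal nodes always have at least one child.
struct TrieNode {
    TrieNode* child[2] = {nullptr, nullptr};
};

// Finds the largest stored key <= key. Returns true and fills `result`,
// or returns false if every stored key is greater than `key`.
bool floorLookup(const TrieNode* root, uint32_t key, uint32_t& result) {
    // Deepest point where we followed a 1-bit but a 0-branch also existed;
    // everything under that 0-branch is smaller than `key`.
    const TrieNode* candidate = nullptr;
    uint32_t candidatePrefix = 0;
    int candidateDepth = -1;

    const TrieNode* node = root;
    uint32_t prefix = 0;
    int depth = 0;
    while (depth < 32 && node != nullptr) {
        int bit = (key >> (31 - depth)) & 1;
        if (bit == 1 && node->child[0] != nullptr) {
            candidate = node->child[0];   // a smaller branch to fall back to
            candidatePrefix = prefix;     // this bit is 0 in the fallback value
            candidateDepth = depth;
        }
        if (node->child[bit] == nullptr)
            break;                        // the exact path ends here
        prefix |= uint32_t(bit) << (31 - depth);
        node = node->child[bit];
        ++depth;
    }
    if (depth == 32) {                    // walked all 32 bits: exact match
        result = key;
        return true;
    }
    if (candidate == nullptr)
        return false;                     // nothing in the trie is <= key

    // Take the right-most (largest) leaf below the fallback branch.
    prefix = candidatePrefix;
    node = candidate;
    for (int d = candidateDepth + 1; d < 32; ++d) {
        int bit = (node->child[1] != nullptr) ? 1 : 0;
        prefix |= uint32_t(bit) << (31 - d);
        node = node->child[bit];
    }
    result = prefix;
    return true;
}

For the example above, searching for 0x00AABB11 falls off the exact path at the first bit where 0x11 and 0x00 differ, falls back to the recorded 0-branch, and then walks right as far as possible, ending at 0x00AABB00.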
If the data is static--you're not adding or removing items--then I'd take a good look at using a simple array with binary search. You sacrifice cache locality, but that might not be catastrophic. I don't see cache locality as an end in itself, but rather a means of making the data structure fast.
You might get better cache locality by creating a balanced binary tree in an array. Position 0 is the root node, position 1 is left node, position 2 is right node, etc. It's the same structure you'd use for a binary heap. If you're willing to allocate another 4 bytes per node, you could make it a left-threaded binary tree so that if you search for X and end up at the next larger value, following that left thread would give you the next smaller value. All told, though, I don't see where this can outperform the plain array in the general case.
A lot depends on how sparse your data is and what the range is. If you're looking at a few thousand possible values in the range 0 to 4 billion, then the binary search looks pretty attractive. If you're talking about 500 million distinct values, then I'd look at allocating a bit array (500 megabytes) and doing a direct lookup with linear backward scan. That would give you very good cache locality.
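For the plain sorted-array option, the <= lookup is a single binary search; a minimal sketch using std::upper_bound (the function name is just for illustration):

#include <algorithm>
#include <cstdint>
#include <vector>

// Largest element <= key in a sorted vector, or false if there is none.
bool floorLookup(const std::vector<uint32_t>& sorted, uint32_t key, uint32_t& result) {
    // upper_bound returns the first element strictly greater than key,
    // so the element just before it (if any) is the floor we want.
    auto it = std::upper_bound(sorted.begin(), sorted.end(), key);
    if (it == sorted.begin()) return false;   // every element is > key
    result = *(it - 1);
    return true;
}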
A bit trie walks 32 nodes even in the best case, when the item is found.
A million entries in a red-black tree like std::map or java.util.TreeMap would only require about log2(1,000,000), or roughly 20, node visits per query when well balanced (a red-black tree guarantees at most about twice that in the worst case). And you do not always need to go to the bottom of the tree, which makes the average case appealing.
When backtracking to find the <= entry, the difference is even more pronounced.
The fewer entries you have, the better the case for a red-black tree.
At a minimum, I would compare any solution to a red-black tree.
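For comparison, the <= query against a library red-black tree is essentially a one-liner; a sketch using std::set, which is typically implemented as a red-black tree:

#include <cstdint>
#include <optional>
#include <set>

// Floor query against std::set. upper_bound returns the first key strictly
// greater than `key`, so the element just before it (if any) is the largest
// stored key <= `key`.
std::optional<uint32_t> floorKey(const std::set<uint32_t>& keys, uint32_t key) {
    auto it = keys.upper_bound(key);
    if (it == keys.begin()) return std::nullopt;   // every stored key is > key
    return *std::prev(it);
}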
I'm thinking that an in-order traversal will run in O(n) time. The only thing better than that would be something running in O(log n) time, but I don't see how that could be possible, considering we have to visit all n nodes.
Is O(n) the fastest we could do here?
Converting and expanding #C.B.'s comment to an answer:
If you have an AVL tree with n strings in it and you want to print all of them, then you have to do at least Θ(n) total work simply because you have to print out each of the n strings. You can often lower-bound the amount of work required to produce a list or otherwise output a sequence of values simply by counting up how many items are going to be in the list.
We can be even more precise here. Suppose the combined length of all the strings in the tree is L. The time required to print out all the strings in the tree has to be at least Θ(L), since it costs some computational effort to output each individual character. Therefore, we can say that we have to do at least Θ(n + L) work to print out all the strings in the tree.
The bound given here just says that any correct algorithm has to do at least this much work, not that there actually is an algorithm that does this much work. But if you look closely at any of the major tree traversals - inorder, preorder, postorder, level-order - you'll find that they all match this time bound.
Now, one area where you can look for savings is in space complexity. A level-order traversal of the tree might require Ω(n) total space if the tree is perfectly balanced (since it holds a whole layer of the tree in memory and the bottommost layer can have Θ(n) nodes in it), while an inorder, preorder, or postorder traversal would only require O(log n) memory because you only need to store the current access path, which has logarithmic height in an AVL tree.
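To make the space point concrete, here is a sketch of an iterative in-order print over a hypothetical AVL node type for strings; it does Θ(n + L) work and never keeps more than the current access path on its stack:

#include <iostream>
#include <stack>
#include <string>

// Hypothetical node type for the AVL tree of strings discussed above;
// balance information is omitted because it plays no role in the traversal.
struct AvlNode {
    std::string value;
    AvlNode* left = nullptr;
    AvlNode* right = nullptr;
};

// Iterative in-order print: Θ(n + L) time to output every string, and only
// O(log n) extra space, because the stack never holds more than the current
// root-to-node access path.
void printInOrder(AvlNode* root) {
    std::stack<AvlNode*> path;
    AvlNode* node = root;
    while (node != nullptr || !path.empty()) {
        while (node != nullptr) {       // walk as far left as possible
            path.push(node);
            node = node->left;
        }
        node = path.top();
        path.pop();
        std::cout << node->value << '\n';
        node = node->right;             // then handle the right subtree
    }
}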
I feel like this has to exist, but I just can't think of it. Is there a data structure that can hold a sorted list of values and be searched quickly (maybe in log(N) time, like a sorted array), and that also supports insertion and removal of elements in log(N) or constant time?
This is pretty much the description of a balanced binary search tree, which stores elements in sorted order, allows for O(log n) insertions, deletions, and lookups, and allows for O(n) traversal of all elements.
There are many ways to build a balanced BST - there are red/black trees, AVL trees, scapegoat trees, splay trees, AA trees, treaps, (a, b)-trees, etc. Any of these would solve your problem. Of them, splay trees are probably the easiest to code up, followed by AA-trees and AVL trees.
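As a quick illustration, std::set (usually a red/black tree under the hood) already exposes exactly this interface; a minimal usage sketch:

#include <cstdio>
#include <set>

// std::set keeps its elements in sorted order, gives O(log n) insert, erase
// and lookup, and O(n) in-order iteration over everything.
int main() {
    std::set<int> values = {30, 10, 20};

    values.insert(25);                       // O(log n)
    values.erase(10);                        // O(log n)
    bool found = values.count(20) > 0;       // O(log n) lookup

    for (int v : values)                     // in-order traversal: 20 25 30
        std::printf("%d ", v);
    std::printf("\nfound 20: %s\n", found ? "yes" : "no");
    return 0;
}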
Hope this helps!
I am not able to figure out the procedure for iterative octree traversal, though I have tried approaching it the way I would a binary tree traversal. For my problem, I have octree nodes with child and parent pointers, and I would like to iterate and store only the leaf nodes in the stack.
Also, is going for iterative traversal faster than recursive traversal?
It is indeed like binary tree traversal, but you need to store a bit of intermediate information. A recursive algorithm will not be slower per se, but it will use a bit more stack space for O(log8 N) levels of recursive calls (about 10 levels for 1 billion elements in the octree).
Iterative algorithms will also need the same amount of space to be efficient, but you can place it on the heap if you are afraid that your stack might overflow.
Recursively you would do (pseudocode):
function traverse_rec (octree):
    collect value // if there are values in your intermediate nodes
    for child in children:
        traverse_rec (child)
The easiest way to arrive at an iterative algorithm is to use a stack or queue for depth-first or breadth-first traversal:
function traverse_iter_dfs(octree):
    stack = empty
    push_stack(root_node)
    while not empty (stack):
        node = pop(stack)
        collect value(node)
        for child in children(node):
            push_stack(child)
Replace the stack with a queue and you get breadth-first search. However, we are storing something in the region of O(7*(log8 N)) nodes which we are yet to traverse. If you think about it, that's the lesser evil though, unless you need to traverse really big trees. The only other way is to use the parent pointers: when you are done in a child, you need to select the next sibling somehow.
If you don't store in advance the index of the current node (with respect to its siblings), though, you can only search all the children of the parent in order to find the next sibling, which essentially doubles the amount of work to be done (for each node you don't just loop through the children but also through the siblings). Also, it looks like you at least need to remember which nodes you have visited already, because otherwise you cannot in general decide whether to descend farther down or return back up the tree (prove me wrong, somebody).
All in all I would recommend against searching for such a solution.
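That said, if the stack-based depth-first version is acceptable, here is a concrete C++ sketch that collects only the leaf nodes, as the question asks; the node layout is hypothetical and will differ from yours:

#include <stack>
#include <vector>

// Hypothetical node layout: eight child pointers, null children mean
// "no child", and a node with no children at all is a leaf.
struct OctreeNode {
    OctreeNode* children[8] = {nullptr};
    OctreeNode* parent = nullptr;
};

// Iterative depth-first traversal that collects only the leaf nodes. The
// explicit stack replaces the call stack of the recursive version; swap it
// for a queue to get breadth-first order.
std::vector<OctreeNode*> collectLeaves(OctreeNode* root) {
    std::vector<OctreeNode*> leaves;
    if (!root) return leaves;

    std::stack<OctreeNode*> pending;
    pending.push(root);
    while (!pending.empty()) {
        OctreeNode* node = pending.top();
        pending.pop();

        bool hasChild = false;
        for (OctreeNode* child : node->children) {
            if (child) { hasChild = true; pending.push(child); }
        }
        if (!hasChild) leaves.push_back(node);   // no children: it's a leaf
    }
    return leaves;
}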
Depends on what your goal is. Are you trying to find whether a node is visible, if a ray will intersect its bounding box, or if a point is contained in the node?
Let's assume that you are doing the last one, checking if a point is/should be contained in the node. I would add a method to the Octnode that takes a point and checks whether or not it lies within the bounding box of the Octnode. If it does, return true, else false; pretty simple. From here, call a drill-down method that starts at your head node and checks each child with a simple "for" loop to see which Octnode the point lies in; it can be at most one.
Here is where your iterative vs. recursive choice comes into play. If you want iterative, just store a pointer to the current node, and swap this pointer from the head node to the child containing your point. Then just keep drilling down until you reach the maximum depth or don't find an Octnode containing it. If you want a recursive solution, then you will call this drill-down method on the Octnode that you found the point in.
I wouldn't say that iterative versus recursive has much performance difference in terms of speed, but it could have a difference in terms of memory performance. Each time you recurse you add another call depth onto the stack. If you have a large Octree this could result in a large number of calls, possibly blowing your stack.
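A rough sketch of that iterative drill-down, with a made-up Octnode and bounding-box layout purely for illustration:

#include <array>

// Illustrative types only; the real Octnode and bounding-box layout will
// differ in your code.
struct Point { float x, y, z; };

struct Octnode {
    Point minCorner, maxCorner;             // axis-aligned bounding box
    std::array<Octnode*, 8> children{};     // null children mean "no child"

    bool contains(const Point& p) const {
        return p.x >= minCorner.x && p.x <= maxCorner.x &&
               p.y >= minCorner.y && p.y <= maxCorner.y &&
               p.z >= minCorner.z && p.z <= maxCorner.z;
    }
};

// Iterative drill-down: walk from the head node to the deepest node whose
// bounding box contains the point, with no recursion.
Octnode* locate(Octnode* head, const Point& p) {
    if (!head || !head->contains(p)) return nullptr;
    Octnode* node = head;
    for (;;) {
        Octnode* next = nullptr;
        for (Octnode* child : node->children) {   // at most one child matches
            if (child && child->contains(p)) { next = child; break; }
        }
        if (!next) return node;                   // leaf, or no containing child
        node = next;
    }
}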
I have a large binary tree, T. T "matches". Some number of subtrees of T will also match. In fact, the matching subtrees need not even be full subtrees: they can be truncated, too. By truncated subtree, I mean that nodes in the subtree may not contain children all the way down - some nodes that have children may have their children removed.
An example: see this link. The tree represented by poem1, stanza1, stanza2, line3 is an example of a truncated subtree.
Determining if a tree matches requires performing a calculation on that entire tree. It's not progressive.
How the heck do I find all matches?
http://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
sounds roughly like what you're trying to find (except that you're trying this on all subgraphs of an original graph as well, making it even harder). I don't really know how you are defining "matches" (equality, pattern, color coordinated, sticks with chemicals on the end that ignite when struck?), so it might be quite a different problem.