Count of nodes in BST - visual-c++

I am trying to count the number of nodes in a Binary Search Tree and was wondering what the most efficient approach is. These are the options that I have found:
1. Store an int count in the BST class.
2. Store an int children in each node of the tree, holding the number of children under it.
3. Write a method that counts the number of nodes in the BST.
If using option 3, I've written:
int InOrder() {
    Node *cur = root;
    int count = 0;
    std::stack<Node*> s;
    bool done = false;
    while (!done) {
        if (cur != NULL) {
            s.push(cur);
            cur = cur->left;
        }
        else {
            if (!s.empty()) {
                cur = s.top();
                s.pop();
                count++;
                cur = cur->right;
            }
            else {
                done = true;
            }
        }
    }
    return count;
}
But from looking at it, it seems like it would get stuck in an infinite loop bouncing between cur = cur->left; and cur = cur->right;.
So which option is the most efficient, and if it is option 3, will this method work?

I think the first option is the quickest, and it only requires O(1) extra space. However, whenever you insert or delete an item you need to keep this value updated.
Getting the total number of nodes then takes O(1) time.
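For illustration, here is a minimal sketch of option 1 in C++; the class layout and the insert method are my own illustrative assumptions, not code from the question:

#include <cstddef>

struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
    explicit Node(int k) : key(k) {}
};

class BST {
    Node *root = nullptr;
    std::size_t nodeCount = 0;   // option 1: maintained on every insert/delete

public:
    void insert(int key) {
        Node **cur = &root;
        while (*cur) {
            if (key < (*cur)->key)      cur = &(*cur)->left;
            else if (key > (*cur)->key) cur = &(*cur)->right;
            else return;                 // duplicate key: tree unchanged
        }
        *cur = new Node(key);
        ++nodeCount;                     // keep the counter in sync
    }

    std::size_t count() const { return nodeCount; }   // O(1) lookup
};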
The second option would make the program way too complicated, since deleting or inserting a node somewhere means updating the counts in all of its ancestors. Either you add a parent pointer so you can walk back up and update each of the ancestors, or you update the counts along the path from the root as you descend. Either way, I think this would be the worst option of the three.
The third option is good if you don't call it many times, since the first option, at O(1), is a lot quicker. This one takes O(n), since you need to visit every single node to get the count.
In terms of your code, I think it's easier to write in a recursive way like below:
int getCount(Node* n)
{
    if (!n)
        return 0;
    return 1 + getCount(n->left) + getCount(n->right);
}
Hope this helps!

Related

How do I simply remove duplicates in my vector?

I am new to coding and struggling with a section in my code. I am at the part where I want to remove duplicate int values from my vector.
my duplicated vector contains: 1 1 2 1 4
my goal is to get a deduplicated vector: 1, 2, 4.
This is what I have so far. It also needs to be a rather simple solution: no pointers or fancy stuff, as I still need to study those in the future.
for(int i = 0; i < duplicatedVector.size(); i++) {
    int temp = duplicatedVector.at(i);
    int counter = 0;
    if(temp == duplicatedVector.at(i)) {
        counter++;
        if(counter > 1) {
            deduplicatedVector.push_back(temp);
        }
    }
}
Could anyone tell me what I am doing wrong? I genuinely am trying to iterate through the vector and delete the duplicated ints, in the given order.
Your algorithm is not well-enough thought out.
Break it up:
(1) for each element of the original vector:
(2)     is it in the result vector?
            yes: do nothing
            no: add it to the result vector
You have your (1) loop, but the (2) part is confused. The result vector is not the same as the original vector, and is not to be indexed the same.
To determine whether an element is in a vector, you need a loop. Loop through your result vector to see if the element is in it. If you find it, it is, so break the inner loop. If you do not, you don't.
You can tell whether or not you found a duplicate by the final value of your inner loop index (the index into the result vector). If it equals result.size() then no duplicate was found.
Clearer variable naming might help as well. You are calling your original/source vector duplicatedVector, and your result vector deduplicatedVector. Even hasDuplicates and noDuplicates would be easier to mentally parse.
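Putting that together, a minimal sketch of the approach just described might look like this (variable names borrowed from the question; treat it as one possible realization, not the only way):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> duplicatedVector = {1, 1, 2, 1, 4};
    std::vector<int> deduplicatedVector;

    for (std::size_t i = 0; i < duplicatedVector.size(); ++i) {
        std::size_t j = 0;
        for (; j < deduplicatedVector.size(); ++j) {
            if (deduplicatedVector[j] == duplicatedVector[i])
                break;                          // already in the result: stop looking
        }
        if (j == deduplicatedVector.size())     // ran off the end: not found
            deduplicatedVector.push_back(duplicatedVector[i]);
    }

    for (int v : deduplicatedVector)
        std::cout << v << ' ';                  // prints: 1 2 4
}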
You could use a set since it eliminates duplicates:
#include <iostream>
#include <set>
#include <vector>
using namespace std;

int main () {
    vector<int> vec;
    vector<int> dedupl;

    vec.push_back(2);
    vec.push_back(4);
    vec.push_back(2);
    vec.push_back(7);
    vec.push_back(34);
    vec.push_back(34);

    set<int> mySet;
    for (int i = 0; i < vec.size(); i++) {
        mySet.insert(vec[i]);
    }

    // note: a set keeps its elements in sorted order,
    // so the original insertion order is not preserved
    for (int elem : mySet) {
        dedupl.push_back(elem);
    }
    for (int elem : dedupl) {
        cout << elem << " ";
    }
}

Find position of item in list using Binary Search

The question is:
Given a list of String, find a specific string in the list and return
its index in the ordered list of String sorted by mergesort. There are
two cases:
The string is in the list, return the index it should be in, in the ordered list.
The String is NOT in the list, return the index it is supposed to be in, in the ordered list.
Here is my code; I assume that the given list is already ordered.
For the 2nd case, how do I use mergesort to find the supposed index? I would appreciate some clues.
I was thinking of getting a copy of the original list first, sorting it, and getting the index of the string in the copied list. Here I got stuck... do I use mergesort again to get the index of the non-existing string in the copied list?
public static int BSearch(List<String> s, String a) {
    int size = s.size();
    int half = size / 2;
    int index = 0;
    // base case?
    if (half == 0) {
        if (s.get(half) == a) {
            return index;
        } else {
            return index + 1;
        }
    }
    // with String a
    if (s.contains(a)) {
        // on the right
        if (s.indexOf(s) > half) {
            List<String> rightHalf = s.subList(half + 1, size);
            index += half;
            return BSearch(rightHalf, a);
        } else {
            // on the left
            List<String> leftHalf = s.subList(0, half - 1);
            index += half;
            return BSearch(leftHalf, a);
        }
    }
    return index;
}
When I run this code, the index is not updated. I wonder what is wrong here. I only get 0 or 1 when I test the code even with the string in the list.
Your code only returns 0 or 1 because you don't carry the index along through the recursive calls; it gets reset to 0 every time. Also, to find where a non-existent element should go, consider the list {0,2,3,5,6}. If we ran a binary search looking for 4 here, it should stop at the index where the element 5 is. Hope that's enough to get you started!
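As a rough illustration of that idea (in C++ rather than the question's Java, and assuming the list is already sorted), a binary search that keeps its bounds as indices into the whole list can return the insertion point directly:

#include <string>
#include <vector>

int searchIndex(const std::vector<std::string>& sorted, const std::string& a) {
    int lo = 0, hi = static_cast<int>(sorted.size());   // search window is [lo, hi)
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] < a)
            lo = mid + 1;        // target lies to the right of mid
        else
            hi = mid;            // target is at mid or to its left
    }
    return lo;   // index of a if present, otherwise where it would be inserted
}

For example, on {"0","2","3","5","6"} a search for "4" returns 3, the slot currently occupied by "5".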

Algorithm for doing many substring reversals?

Suppose I have a string S of length N, and I want to perform M of the following operations:
choose 1 <= L,R <= N and reverse the substring S[L..R]
I am interested in what the final string looks like after all M operations. The obvious approach is to do the actual swapping, which leads to O(MN) worst-case behavior. Is there a faster way? I'm trying to just keep track of where an index ends up, but I cannot find a way to reduce the running time (though I have a gut feeling O(M lg N + N) -- for the operations and the final reading -- is possible).
Yeah, it's possible. Make a binary tree structure like
struct node {
    struct node *child[2];
    struct node *parent;
    char label;
    bool subtree_flipped;
};
Then you can have a logical getter/setter for left/right child:
struct node *get_child(struct node *u, bool right) {
    return u->child[u->subtree_flipped ^ right];
}

void set_child(struct node *u, bool right, struct node *c) {
    u->child[u->subtree_flipped ^ right] = c;
    if (c != NULL) { c->parent = u; }
}
Rotations have to preserve flipped bits:
struct node *detach(struct node *u, bool right) {
    struct node *c = get_child(u, right);
    if (c != NULL) { c->subtree_flipped ^= u->subtree_flipped; }
    return c;
}

void attach(struct node *u, bool right, struct node *c) {
    set_child(u, right, c);
    if (c != NULL) { c->subtree_flipped ^= u->subtree_flipped; }
}

// rotates one of |p|'s child up.
// does not fix up the pointer to |p|.
void rotate(struct node *p, bool right) {
    struct node *u = detach(p, right);
    struct node *c = detach(u, !right);
    attach(p, right, c);
    attach(u, !right, p);
}
Implement splay with rotations. It should take a "guard" pointer that is treated as a NULL parent for the purpose of splaying, so that you can splay one node to the root and another to its right child. Do this and then you can splay both endpoints of the flipped region and then toggle the flip bits for the root and the two subtrees corresponding to segments left unaffected.
Traversal looks like this.
void traverse(struct node *u, bool flipped) {
    if (u == NULL) { return; }
    flipped ^= u->subtree_flipped;
    traverse(u->child[flipped], flipped);
    visit(u);
    traverse(u->child[!flipped], flipped);
}
A splay tree may help you; it supports the reverse operation on an array with total complexity O(m log n).
@F. Ju is right, splay trees are one of the best data structures to achieve your goal.
However, if you don't want to implement them, or if a solution in O((N + M) * sqrt(M)) is good enough, you can do the following:
We will perform sqrt(M) consecutive queries and then rebuild the array from scratch in O(N) time.
In order to do that, for each query we store whether the queried segment [a, b] is reversed or not (if you reverse some range of elements twice, they become unreversed).
The key here is to maintain this information for disjoint segments. Notice that since we perform at most sqrt(M) queries before rebuilding the array, we have at most sqrt(M) disjoint segments, and we can perform a query operation on sqrt(M) segments in sqrt(M) time. Let me know if you need a detailed explanation of how to "reverse" these disjoint segments.
This trick is very useful when solving problems like this, and it is worth knowing.
UPDATE:
I solved the problem exactly corresponding to yours on HackerRank, during their contest, using the method I described.
Here is the problem
Here is my solution in C++.
Here is the discussion about the problem and a brief description of my method, please check my 3rd message there.
I'm trying to just keep track of where an index ends up
If you're just trying to follow one entry of the starting array, it's easy to do that in O(M) time.
I was going to just write pseudocode, but no hand-waving was needed so I ended up with what's probably valid C++.
// untested C++, but it does compile to code that looks right.
struct swap {
    int l, r;
    // or make these non-member functions for C
    bool covers(int pos) { return l <= pos && pos <= r; }
    int apply_if_covering(int pos) {
        // startpos - l = r - endpos;
        // endpos = l - startpos + r
        if (covers(pos))
            pos = l - pos + r;
        return pos;
    }
};

int follow_swaps(int pos, int len, struct swap swaps[], int num_swaps)
{
    // pos = starting position of the element we want to track
    // return value = where it will be after all the swaps
    for (int i = 0; i < num_swaps; i++) {
        pos = swaps[i].apply_if_covering(pos);
    }
    return pos;
}
This compiles to very efficient-looking code.
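For a quick sanity check, here is a small hypothetical usage of follow_swaps; the positions and reversal ranges are made up for illustration:

#include <cstdio>

int main() {
    // track where position 2 of a length-6 string ends up after
    // reversing [1..4] and then [0..3] (all indices 0-based)
    struct swap ops[] = { {1, 4}, {0, 3} };
    int finalPos = follow_swaps(2, 6, ops, 2);
    // [1..4]: 2 -> 1 + 4 - 2 = 3;  [0..3]: 3 -> 0 + 3 - 3 = 0
    std::printf("%d\n", finalPos);   // prints 0
}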

Threadsafe mutable collection with fast elements removal and random get

I need a thread-safe data structure with three operations: remove, getRandom, reset.
I have only two ideas by now.
First: a Seq in a synchronized var.
val all: Array[String] = ... // all possible
var current: Array[String] = Array.empty[String]

def getRandom(): String = {
  val currentAvailable = current
  currentAvailable(Random.nextInt(currentAvailable.length))
}

def remove(s: String): Unit = this.synchronized {
  current = current diff Seq(s)
}

def reset(): Unit = this.synchronized {
  current = all
}
Second:
Maintain some Map[String, Boolean], where the Boolean is true when the element is currently present. The main problem is making the getRandom method fast (not something like O(n) in the worst case).
Is there a better way(s) to implement this?
Scala's concurrent TrieMap is a lock-free data structure that supports snapshots (i.e. your currentAvailable) and fast removals.
Since I'm not a Scala expert, this answer is kept general; as an example I used Java.
In short, the answer is yes.
If you use two maps such as:
Map<Integer,String> map = new HashMap<Integer,String>();    // used to get a random element in constant time
Map<String,Integer> map1 = new HashMap<String,Integer>();   // used to remove in constant time
to store the data, the main idea is to keep the keys of the first map (the integers) contiguous, i.e. {0 ... size-1}.
For example, to fill this structure you need something like this:
int counter = 0;   // this is a global variable
for (String s : all) {   // for all your strings in `all`
    map.put(counter++, s);
}
// then, if you want the removal to be in constant time, you need to fill the second map
for (Map.Entry<Integer, String> e : map.entrySet()) {
    map1.put(e.getValue(), e.getKey());
}
The above code is the initialization; it needs to run every time you reset the structure.
You can then get a random value with O(1) complexity:
Random rnd = new Random();   // shared random generator

String getRandom() {
    int i = rnd.nextInt(counter);   // random index between 0 (inclusive) and counter (exclusive)
    return map.get(i);
}
Now, to remove things, you use map1 to achieve it in constant time O(1):
void remove(String s) {
    if (!map1.containsKey(s))
        return;                            // s doesn't exist
    int thisCounter = map1.get(s);         // index of the element being removed
    int last = counter - 1;                // index of the last element
    String val = map.get(last);            // value of the last element
    map.put(thisCounter, val);             // move the last element into the freed slot
    map1.put(val, thisCounter);
    map.remove(last);                      // drop the now-duplicated last slot
    map1.remove(s);                        // remove from map1
    counter--;                             // reducing the counter by one
}
Obviously, the main issue here is keeping everything properly synchronized, but by carefully analyzing the code you should be able to do that.

Iterative deepening search : Is it recursive?

I've searched the internet for the IDS algorithm and I keep finding examples, but they are all recursive, and as I understood it, iterative is not recursive.
So can you please give me some examples of the IDS algorithm? (An implementation would be great, and without recursion.)
Thanks in advance! You will be a life saver!
The iterative part is not recursive: at the top it is more or less:
int limit = 0;
Solution sol;
do {
    limit++;
    sol = search(problem, limit);
} while (sol == null);
// do something with the solution.
This said, in most cases searching for a solution is indeed implemented recursively:
Solution search(Problem problem, int limit) {
    return search(problem, 0, limit);
}

Solution search(Problem problem, int price, int limit) {
    if (problem.solved) {
        return problem.getSolution();
    }
    for (int value = 0; value < valuerange; value++) {
        problem.assignVariable(value);
        int newprice = price + problem.price();
        if (price < limit) {
            Solution solution = search(problem, newprice, limit);
            if (solution != null) {
                return solution;
            }
        }
        problem.backtrackVariable();
    }
    return null;
}
But there exists an automatic procedure to turn any recursive program into a non-recursive one.
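For instance, a depth-limited search can be written with an explicit stack instead of recursion. Here is a rough sketch; the adjacency-list tree, the goal test, and the function name idsFind are illustrative assumptions, not taken from the answer above:

#include <iostream>
#include <stack>
#include <utility>
#include <vector>

int idsFind(const std::vector<std::vector<int>>& adj, int start, int goal) {
    for (int limit = 0; ; ++limit) {                // deepen one level at a time
        std::stack<std::pair<int,int>> st;          // (node, depth)
        st.push({start, 0});
        bool anyCutoff = false;                     // did we hit the limit anywhere?
        while (!st.empty()) {
            auto [node, depth] = st.top();
            st.pop();
            if (node == goal) return depth;
            if (depth == limit) { anyCutoff = true; continue; }
            for (int next : adj[node])
                st.push({next, depth + 1});
        }
        if (!anyCutoff) return -1;                  // whole tree explored, no goal found
    }
}

int main() {
    // tiny example tree: 0 -> {1,2}, 1 -> {3}, 2 -> {4}
    std::vector<std::vector<int>> adj = {{1, 2}, {3}, {4}, {}, {}};
    std::cout << idsFind(adj, 0, 4) << '\n';        // prints 2
}

Note that this sketch assumes the search space is a tree; on a graph with cycles you would also need to track visited nodes or depths.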
If you are thinking in algorithm terms (not just implementation), this would mean applying iteration at all nodes of the search tree, instead of just at the root node.
In the case of chess programs, this does have some benefits. It improves move ordering, even in the case where a branch that was previously pruned by alpha-beta is later included. The cost of the extra search is kept low by using a transposition table.
https://www.chessprogramming.org/Internal_Iterative_Deepening
