Default return value of a function in BASIC

Following is an example program in BASIC. Can someone tell me what this function returns if the marked condition is not true? I have to port the program to C++ and need to understand it. I have no BASIC knowledge, so please bear with a simple question.
FUNCTION CheckPoss (u)
    tot = tot + 1
    f = 0
    SELECT CASE u
        CASE 2
            f = f + CheckIntersection(1, 3, 2, 1) 'A
        CASE 3
            f = f + CheckIntersection(2, 3, 3, 1) 'B
    END SELECT
    IF f = 0 THEN      <============== This condition if true,
        CheckPoss = 1  <============== then return value is 1
        IF u = 9 THEN
            PrintSolution
        END IF
    END IF
END FUNCTION

This is a good example of bad programming. First, an unknown global variable is changed in this function: "tot = tot + 1"! On the second line another unknown global variable, "f", is assigned 0. Or is this the only place this variable is used? In that case it is a Variant implicitly declared here; use a Dim to declare it. It is legal in BASIC to do this. Globals should be passed as arguments to the function, like this:
function CheckPoss(u as integer, tot as integer) as integer
    dim f as integer
    f = 0
It is all about good practice: the input should be clear, the output should be clear, and everything the function works on should come in through arguments passed to it.
The return type is not declared either. Is this Visual Basic, or some older BASIC? In Visual Basic the return type would be a Variant; in an older BASIC it would default to a numeric type.
The output from this function will most likely be zero if the condition is not met! That should also be clear from the code, and as written it is not, so I understand why you ask. I am amazed this piece of code comes from a working program.
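If it helps, here is a minimal sketch of how the C++ port could make that default explicit (the declarations of tot, CheckIntersection and PrintSolution are assumed to live elsewhere in your port):
// Sketch of a C++ port; these are assumed to be defined elsewhere in your program.
extern int tot;
int CheckIntersection(int, int, int, int);
void PrintSolution();

int CheckPoss(int u)
{
    tot = tot + 1;
    int f = 0;
    switch (u)
    {
        case 2: f += CheckIntersection(1, 3, 2, 1); break; // A
        case 3: f += CheckIntersection(2, 3, 3, 1); break; // B
    }
    if (f == 0)
    {
        if (u == 9)
            PrintSolution();
        return 1;
    }
    return 0; // what the BASIC version effectively returns when the condition is not met
}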
Good luck with your project!

I don't know exactly what this function does.
In VB.NET, a function follows this structure:
Public Function CheckPoss(ByVal u As Integer)
    ... ' Just commands
    Return u ' Or some other variable
End Function
If there is no Return statement (and nothing is assigned to the function name), the function returns the default value of its return type: 0 for numeric types, Nothing for objects.
In C, the function would be:
int CheckPoss(int u) {
    tot++; // increment the tot variable (it needs to be declared as a global)
    int f = 0;
    switch (u) {
        case 2:
            f += CheckIntersection(1, 3, 2, 1); // A
            break;
        case 3:
            f += CheckIntersection(2, 3, 3, 1); // B
            break;
    }
    if (f == 0) {
        if (u == 9) {
            PrintSolution();
        }
        return 1;
    }
}
A return statement needs to be the last statement of this function; in the case f != 0 the function falls off the end and returns garbage (an indeterminate value).
My suggestion is:
int CheckPoss(int u) {
    tot++; // I think this counts how many times you call this function
    int f;
    if (u == 2) {
        f = CheckIntersection(1, 3, 2, 1); // A
    } else if (u == 3) {
        f = CheckIntersection(2, 3, 3, 1); // B
    } else {
        f = 1; // case else
    }
    if (f == 0) {
        if (u == 9)
            PrintSolution();
        return 1;
    }
    return 0; // explicit value for the f != 0 case, matching BASIC's default of 0
}

Related

What is the worst case for binary search

Where should an element be located in the array so that the run time of the Binary search algorithm is O(log n)?
An element at one of the two ends of the sorted array forces the maximum number of comparisons; which end it is depends on how the midpoint is rounded. With mid = floor((low + high) / 2), as in the code below, the last element is a worst case; if mid were rounded up instead, it would be the first element.
Example:
1 2 3 4 5 6 7 8 9
Here searching for 9 gives the worst case, with the result found on the 4th pass.
1 2 3 4 5 6 7 8
In this case, searching for 8 gives the worst case, again with the result found in 4 passes.
Note that in both arrays searching for 1 (the first element) takes only 3 passes (in the second array: compare with 4, then 2, then 1), because rounding the midpoint down makes the left half the smaller one.
This assumes all arrays are 0-indexed.
// Java implementation of iterative Binary Search
class BinarySearch
{
    // Returns index of x if it is present in arr[],
    // else return -1
    int binarySearch(int arr[], int x)
    {
        int l = 0, r = arr.length - 1;
        while (l <= r)
        {
            int m = l + (r - l) / 2;
            // Check if x is present at mid
            if (arr[m] == x)
                return m;
            // If x greater, ignore left half
            if (arr[m] < x)
                l = m + 1;
            // If x is smaller, ignore right half
            else
                r = m - 1;
        }
        // if we reach here, then element was
        // not present
        return -1;
    }

    // Driver method to test above
    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = {2, 3, 4, 10, 40};
        int n = arr.length;
        int x = 10;
        int result = ob.binarySearch(arr, x);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at index " + result);
    }
}
Time Complexity:
The time complexity of Binary Search can be written as
T(n) = T(n/2) + c
The above recurrence can be solved either using the Recurrence Tree method or the Master method. It falls in case II of the Master method, and the solution of the recurrence is Theta(log n).
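For instance, expanding the recurrence directly:
T(n) = T(n/2) + c = T(n/4) + 2c = ... = T(n/2^k) + k*c
The expansion stops when n/2^k = 1, i.e. k = log2(n), giving T(n) = T(1) + c*log2(n) = Theta(log n).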
Auxiliary Space: O(1) in the case of the iterative implementation; in the case of a recursive implementation, O(log n) recursion call stack space.

LongAdder Striped64 wasUncontended implementation detail

This is not a question about how LongAdder works; it's about an intriguing implementation detail that I can't figure out.
Here is the code from Striped64 (I've cut out some parts and left the relevant parts for the question):
final void longAccumulate(long x, LongBinaryOperator fn,
                          boolean wasUncontended) {
    int h;
    if ((h = getProbe()) == 0) {
        ThreadLocalRandom.current(); // force initialization
        h = getProbe();
        wasUncontended = true;
    }
    boolean collide = false; // True if last slot nonempty
    for (;;) {
        Cell[] as; Cell a; int n; long v;
        if ((as = cells) != null && (n = as.length) > 0) {
            if ((a = as[(n - 1) & h]) == null) {
                // logic to insert the Cell in the array
            }
            // CAS already known to fail
            else if (!wasUncontended) {
                wasUncontended = true; // Continue after rehash
            }
            else if (a.cas(v = a.value, ((fn == null) ? v + x : fn.applyAsLong(v, x)))) {
                break;
            }
A lot of things in this code are clear to me, except for this:
// CAS already known to fail
else if (!wasUncontended) {
    wasUncontended = true; // Continue after rehash
}
Where does the certainty come from that the following CAS would fail?
This is really confusing, for me at least, because this check only makes sense in a single case: when some thread enters the longAccumulate method for the n-th time (n > 1) and the busy spin is at its first cycle.
It's as if this code is saying: if you (some thread) have been here before and you had contention on a particular Cell slot, don't try to CAS your value onto the existing one, but instead rehash the probe.
I honestly hope this makes sense to someone.
It's not that it will fail, it's more that it has failed. The call to this method is done by the LongAdder add method.
public void add(long x) {
    Cell[] as; long b, v; int m; Cell a;
    if ((as = cells) != null || !casBase(b = base, b + x)) {
        boolean uncontended = true;
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[getProbe() & m]) == null ||
            !(uncontended = a.cas(v = a.value, v + x)))
            longAccumulate(x, null, uncontended);
    }
}
The first set of conditionals is related to the existence of the long Cells. If the necessary cell doesn't exist, it will try to accumulate uncontended (as there has been no attempt to add yet) by atomically adding the necessary cell and then adding.
If the cell does exist, try to add (v + x). If the add failed then there was some form of contention; in that case try to do the accumulating optimistically/atomically (spin until successful).
So why does it have
wasUncontended = true; // Continue after rehash
My best guess is that with heavy contention, it will try to give the running thread time to catch up and will force a retry of the existing cells.

Maximum element in array which is equal to product of two elements in array

We need to find the maximum element in an array which is also equal to the product of two elements in the same array. For example, [2, 3, 6, 8]: here 6 = 2*3, so the answer is 6.
My approach was to sort the array, followed by a two-pointer method which checked whether the product exists for each element. This is O(n log n) + O(n^2) = O(n^2). Is there a faster way to do this?
There is a slightly better solution, O(n * sqrt(n)), if you are allowed to use O(M) memory, where M is the maximum value in A.
Use an array of size M to mark every number while you traverse the array from the smallest to the largest number.
For each number, try all of its factors and see whether both the factor and its cofactor were already marked in the array map.
Here is pseudo code for that:
#define M 1000000
int array_map[M+2];
int ans = -1;
sort(A, A+n);
for (i = 0; i < n; i++) {
    for (j = 1; j <= sqrt(A[i]); j++) {
        int num1 = j;
        if (A[i] % num1 == 0) {
            int num2 = A[i] / num1;
            if (array_map[num1] && array_map[num2]) {
                if (num1 == num2) {
                    if (array_map[num1] >= 2) ans = A[i];
                } else {
                    ans = A[i];
                }
            }
        }
    }
    array_map[A[i]]++;
}
There is an even better approach: if you know how to find all possible factors of a number in O(log M), this becomes O(n * log M). You have to use a sieve and backtracking for that.
@JerryGoyal's solution is correct. However, I think it can be optimized even further if, instead of using the B pointer, we use binary search to find the other factor of the product whenever arr[c] is divisible by arr[a]. Here's the modification to his code:
for (c = n - 1; (c > 1) && (max == -1); c--) {     // loop through C
    for (a = 0; (a < c - 1) && (max == -1); a++) { // loop through A
        if (arr[c] % arr[a] == 0)                  // if arr[c] is divisible by arr[a]
        {
            if (Arrays.binarySearch(arr, a + 1, c, arr[c] / arr[a]) >= 0) // binary search in arr[a+1 .. c-1]
            {
                max = arr[c]; // the other factor x of arr[c] is also in the array, such that arr[c] = arr[a] * x
                break;
            }
        }
    }
}
I would have commented this on his solution, unfortunately I lack the reputation to do so.
Try this, written in C++:
#include <vector>
#include <algorithm>
using namespace std;

int MaxElement(vector< int > Input)
{
    sort(Input.begin(), Input.end());
    int LargestElementOfInput = 0;
    int i = 0;
    while (i < Input.size() - 1)
    {
        if (LargestElementOfInput == Input[Input.size() - (i + 1)])
        {
            i++;
            continue;
        }
        else
        {
            if (Input[i] != 0)
            {
                LargestElementOfInput = Input[Input.size() - (i + 1)];
                int AllowedValue = LargestElementOfInput / Input[i];
                int j = 0;
                while (j < Input.size())
                {
                    if (Input[j] > AllowedValue)
                        break;
                    else if (j == i)
                    {
                        j++;
                        continue;
                    }
                    else
                    {
                        int Product = Input[i] * Input[j++];
                        if (Product == LargestElementOfInput)
                            return Product;
                    }
                }
            }
            i++;
        }
    }
    return -1;
}
Once you have sorted the array, you can use that to your advantage, as below.
One improvement I can see, since you want to find the max element that meets the criteria:
Start from the right-most element of the array (8).
Divide it by the first element of the array (8/2 = 4).
Now continue with the two-pointer approach until the element at the second pointer exceeds the value from step 2, or a match is found (i.e. stop once the second pointer's value is > 4, or when a match is found).
If a match is found, you have the max element.
Else, continue the loop with the next highest element of the array (6).
Efficient solution:
2 3 8 6
Sort the array.
Keep 3 pointers C, B and A, with C at the last index, A at index 0 and B at index 1.
Traverse the array using pointers A and B up to C and check whether A*B = C exists or not.
If it exists, then C is your answer.
Else, move C one position back and traverse again, keeping A at index 0 and B at index 1.
Keep repeating this until you find the product or C reaches index 1.
Here's the complete solution:
int arr[] = new int[]{2, 3, 8, 6};
Arrays.sort(arr);
int n = arr.length;
int a, b, c, prod, max = -1;
for (c = n - 1; (c > 1) && (max == -1); c--) {     // loop through C
    for (a = 0; (a < c - 1) && (max == -1); a++) { // loop through A
        for (b = a + 1; b < c; b++) {              // loop through B
            prod = arr[a] * arr[b];
            if (prod == arr[c]) {
                System.out.println("A: " + arr[a] + " B: " + arr[b]);
                max = arr[c];
                break;
            }
            if (prod > arr[c]) { // no need to go further
                break;
            }
        }
    }
}
System.out.println(max);
I came up with the solution below, using one collection and the following formula (a rough code sketch follows these steps):
divisor (a or b) x quotient (b or a) = dividend (c)
Sort the array.
Put the array into a collection Col (e.g. one which has fast lookup and maintains insertion order).
Have 2 pointers a, c.
Keep c at the last index, and a at 0.
Try to satisfy (divisor (a or b) x quotient (b or a) = dividend (c)).
Check if a is a divisor of c; if yes, then check for b in Col.
If a is a divisor and the collection has b, then c is the answer.
Else increase a by 1 and follow steps 5 and 6 up to c-1.
If the max is not found, decrease the c index and follow steps 4 and 5 again.
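For what it's worth, here is a rough C++ sketch of this idea using std::set as the lookup collection (my own illustration; edge cases such as reusing the same element as both divisor and quotient, or treating c = 1 * c as valid, are not handled, just as they are not addressed in the steps above):
#include <algorithm>
#include <set>
#include <vector>

// Returns the largest element that is the product of two elements, or -1 if none exists.
int maxProductElement(std::vector<int> a)
{
    std::sort(a.begin(), a.end());
    std::set<int> col(a.begin(), a.end());         // fast lookup of the values
    for (int c = (int)a.size() - 1; c >= 0; --c) { // candidate c, from largest down
        for (int i = 0; i < c; ++i) {              // divisor candidate a
            if (a[i] != 0 && a[c] % a[i] == 0 && col.count(a[c] / a[i]))
                return a[c];                       // divisor and quotient are both present
        }
    }
    return -1;
}
For the example [2, 3, 6, 8] this returns 6.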
Check this C# solution:
- Loop through each element,
- loop and multiply each element with the other elements,
- verify if the product exists in the array and is the max.
private static int GetGreatest(int[] input)
{
    int max = 0;
    int p = 0; // product of pairs
    // loop through the input array
    for (int i = 0; i < input.Length; i++)
    {
        for (int j = i + 1; j < input.Length; j++)
        {
            p = input[i] * input[j];
            if (p > max && Array.IndexOf(input, p) != -1)
            {
                max = p;
            }
        }
    }
    return max;
}
Time complexity: the two nested loops give O(n^2) pairs, and each Array.IndexOf lookup is itself O(n), so this is O(n^3) overall.

Finding minimum moves required for making 2 strings equal

This is a question from an online coding challenge (which has ended).
I just need some logic for how to approach it.
Problem Statement:
We have two strings A and B with the same super set of characters. We need to change these strings to obtain two equal strings. In each move we can perform one of the following operations:
1. swap two consecutive characters of a string
2. swap the first and the last characters of a string
A move can be performed on either string.
What is the minimum number of moves that we need in order to obtain two equal strings?
Input Format and Constraints:
The first and the second line of the input contain two strings A and B. It is guaranteed that the supersets of their characters are equal.
1 <= length(A) = length(B) <= 2000
All the input characters are between 'a' and 'z'
Output Format:
Print the minimum number of moves to the only line of the output
Sample input:
aab
baa
Sample output:
1
Explanation:
Swap the first and last character of the string aab to convert it to baa. The two strings are now equal.
EDIT: Here is my first try, but I'm getting the wrong output. Can someone point out what is wrong in my approach?
int minStringMoves(char* a, char* b) {
    int length, pos, i, j, moves = 0;
    char *ptr;
    length = strlen(a);
    for (i = 0; i < length; i++) {
        // Find the first occurrence of b[i] in a
        ptr = strchr(a, b[i]);
        pos = ptr - a;
        // If it's the last element, swap with the first
        if (i == 0 && pos == length - 1) {
            swap(&a[0], &a[length - 1]);
            moves++;
        }
        // Else swap from current index till pos
        else {
            for (j = pos; j > i; j--) {
                swap(&a[j], &a[j - 1]);
                moves++;
            }
        }
        // If equal, break
        if (strcmp(a, b) == 0)
            break;
    }
    return moves;
}
Take a look at this example:
aaaaaaaaab
abaaaaaaaa
Your solution: 8
aaaaaaaaab -> aaaaaaaaba -> aaaaaaabaa -> aaaaaabaaa -> aaaaabaaaa ->
aaaabaaaaa -> aaabaaaaaa -> aabaaaaaaa -> abaaaaaaaa
Proper solution: 2
aaaaaaaaab -> baaaaaaaaa -> abaaaaaaaa
You should check whether swapping in the other direction would give you a better result.
But sometimes you will also ruin the previous part of the string, e.g.:
caaaaaaaab
cbaaaaaaaa
caaaaaaaab -> baaaaaaaac -> abaaaaaaac
You need another swap here to put back the 'c' to the first place.
The proper algorithm is probably even more complex, but you can see now what's wrong in your solution.
The A* algorithm might work for this problem.
The initial node will be the original string.
The goal node will be the target string.
Each child of a node will be all possible transformations of that string.
The current cost g(x) is simply the number of transformations thus far.
The heuristic h(x) is half the number of characters in the wrong position.
Since h(x) is admissible (because a single transformation can't put more than 2 characters in their correct positions), the path to the target string will give the least number of transformations possible.
However, an elementary implementation will likely be too slow. Calculating all possible transformations of a string would be rather expensive.
Note that there's a lot of similarity between a node's siblings (its parent's children) and its children. So you may be able to just calculate all transformations of the original string and, from there, simply copy and recalculate data involving changed characters.
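To make this concrete, here is a rough C++ sketch of such an A* search (my own illustration, not the challenge's reference solution). It assumes it is enough to transform A into B: every allowed move is a swap, hence its own inverse, so moves applied to B can be replayed in reverse on A. As noted above, it will be too slow for long strings.
#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>
using namespace std;

// Heuristic: half the number of mismatched positions, rounded up
// (one swap fixes at most two positions, so this never overestimates).
static int heuristic(const string& a, const string& b) {
    int mismatches = 0;
    for (size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i]) ++mismatches;
    return (mismatches + 1) / 2;
}

static int minMoves(const string& start, const string& target) {
    // Min-heap ordered by f = g + h; bestG remembers the cheapest known g per state.
    priority_queue<pair<int, string>, vector<pair<int, string>>, greater<pair<int, string>>> open;
    unordered_map<string, int> bestG;
    bestG[start] = 0;
    open.push({heuristic(start, target), start});
    while (!open.empty()) {
        pair<int, string> top = open.top(); open.pop();
        int f = top.first;
        string cur = top.second;
        int g = bestG[cur];
        if (cur == target) return g;
        if (f > g + heuristic(cur, target)) continue; // stale queue entry
        // Neighbours: every adjacent swap, plus the first/last swap.
        vector<string> next;
        for (size_t i = 0; i + 1 < cur.size(); ++i) {
            string s = cur; std::swap(s[i], s[i + 1]); next.push_back(s);
        }
        if (cur.size() > 1) {
            string s = cur; std::swap(s.front(), s.back()); next.push_back(s);
        }
        for (const string& s : next) {
            auto it = bestG.find(s);
            if (it == bestG.end() || g + 1 < it->second) {
                bestG[s] = g + 1;
                open.push({g + 1 + heuristic(s, target), s});
            }
        }
    }
    return -1; // unreachable when both strings use the same multiset of characters
}

int main() {
    cout << minMoves("aab", "baa") << endl; // prints 1 for the sample input
}
For the aaaaaaaaab / abaaaaaaaa example above it returns 2.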
You can use dynamic programming. Go over all swap possibilities while storing all the intermediate results along with the minimal number of steps that took you there. In effect you calculate, for every string that can be obtained by applying the given rules some number of times, the minimum number of steps needed to reach it. Once you have calculated it all, you can print the minimum number of steps needed to reach the target string. Here's sample code in JavaScript, and its usage for the "aab" and "baa" example:
function swap(str, i, j) {
    var s = str.split("");
    s[i] = str[j];
    s[j] = str[i];
    return s.join("");
}
function calcMinimumSteps(current, stepsCount) {
    if (typeof(memory[current]) !== "undefined") {
        if (memory[current] > stepsCount) {
            memory[current] = stepsCount;
        } else if (memory[current] < stepsCount) {
            stepsCount = memory[current];
        }
    } else {
        memory[current] = stepsCount;
        calcMinimumSteps(swap(current, 0, current.length - 1), stepsCount + 1);
        for (var i = 0; i < current.length - 1; ++i) {
            calcMinimumSteps(swap(current, i, i + 1), stepsCount + 1);
        }
    }
}
var memory = {};
calcMinimumSteps("aab", 0);
alert("Minimum steps count: " + memory["baa"]);
Here is the Ruby logic for this problem; copy this code into an .rb file and execute it.
str1 = "education" # Sample first string
str2 = "cnatdeiou" # Sample second string
moves_count = 0
no_swap = 0
count = str1.length - 1

def ends_swap(str1, str2)
  str2 = swap_strings(str2, str2.length - 1, 0)
  return str2
end

def swap_strings(str2, cp, np)
  current_string = str2[cp]
  new_string = str2[np]
  str2[cp] = new_string
  str2[np] = current_string
  return str2
end

def consecutive_swap(str, current_position, target_position)
  counter = 0
  diff = current_position > target_position ? -1 : 1
  while current_position != target_position
    new_position = current_position + diff
    str = swap_strings(str, current_position, new_position)
    # p "-------"
    # p "CP: #{current_position} NP: #{new_position} TP: #{target_position} String: #{str}"
    current_position += diff
    counter += 1
  end
  return counter, str
end

while (str1 != str2 && count != 0)
  counter = 1
  if str1[-1] == str2[0]
    # p "cross match"
    str2 = ends_swap(str1, str2)
  else
    # p "No match for #{str2}-- Count: #{count}, TC: #{str1[count]}, CP: #{str2.index(str1[count])}"
    str = str2[0..count]
    cp = str.rindex(str1[count])
    tp = count
    counter, str2 = consecutive_swap(str2, cp, tp)
    count -= 1
  end
  moves_count += counter
  # p "Step: #{moves_count}"
  # p str2
end
p "Total moves: #{moves_count}"
Please feel free to suggest any improvements in this code.
Try this code. Hope this will help you.
import java.util.Scanner;

public class TwoStringIdentical {
    static int lcs(String str1, String str2, int m, int n) {
        int L[][] = new int[m + 1][n + 1];
        int i, j;
        for (i = 0; i <= m; i++) {
            for (j = 0; j <= n; j++) {
                if (i == 0 || j == 0)
                    L[i][j] = 0;
                else if (str1.charAt(i - 1) == str2.charAt(j - 1))
                    L[i][j] = L[i - 1][j - 1] + 1;
                else
                    L[i][j] = Math.max(L[i - 1][j], L[i][j - 1]);
            }
        }
        return L[m][n];
    }

    static void printMinTransformation(String str1, String str2) {
        int m = str1.length();
        int n = str2.length();
        int len = lcs(str1, str2, m, n);
        System.out.println((m - len) + (n - len));
    }

    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        String str1 = scan.nextLine();
        String str2 = scan.nextLine();
        printMinTransformation(str1, str2);
    }
}

Calculating the sum of nodes in a single vertical line of a binary tree

For a binary tree I want to get the sum of all nodes that fall on a single vertical line. I want the sum of the nodes on each vertical line.
          A
         / \
        B   C
       / \ / \
      D  E F  G
     / \
    H   I
If you look at the above tree:
line 0: A, E, F, so sum = A + E + F
line -1: B, I, so sum = B + I
line 1: C, so sum = C
line 2: G, so sum = G
I implemented the following algorithm:
Map<Integer, Integer> mp = new HashMap<Integer, Integer>();
calculate(root, 0);

void calculate(Node node, int pos) {
    if (node == null)
        return;
    if (mp.containsKey(pos)) {
        int val = mp.get(pos) + node.data;
        mp.put(pos, val);
    } else {
        mp.put(pos, node.data);
    }
    calculate(node.left, pos - 1);
    calculate(node.right, pos + 1);
}
I think the above algorithm is fine. Can anyone confirm?
Also, how can I do it without using a HashMap, ArrayList or any such collection datatype of Java? One method is to use 2 arrays, one for storing negative indexes (mapped to positive) and one for positive indexes (right side of the root). But we don't know what the size of the arrays will be.
One approach is to use a doubly linked list and add a node on a right/left movement if necessary. I am not getting how I can implement this approach. Is there any other simple/more time efficient approach?
Is the complexity of the above code I implemented O(n)? (I am not good at analysing time complexity, so asking.)
C++ code
int vertsum(Node* n, int cur_level, int target_level)
{
    if (!n)
        return 0;
    int sum = 0;
    if (cur_level == target_level)
        sum = n->value;
    return sum +
           vertsum(n->left, cur_level - 1, target_level) +
           vertsum(n->right, cur_level + 1, target_level);
}
invocation example:
vertsum(root, 0, 1);
EDIT:
After clarifying the requirements, here is the suggested code. Note that this is C++'ish and not exactly using Java's or C++'s standard API for lists, but you should get the idea. I assume that addNodeBefore and addNodeAfter initialize the node's data (i.e. ListNode::counter).
void vertsum(TreeNode* n, int level, ListNode& counter)
{
    if (!n)
        return;
    counter.value += n->value;
    counter.index = level;
    if (!counter.prev)
        addNodeBefore(counter);
    vertsum(n->left, level - 1, counter.prev);
    if (!counter.next)
        addNodeAfter(counter);
    vertsum(n->right, level + 1, counter.next);
    return;
}
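For reference, a self-contained C++ variant of the same idea, with the linked-list helpers written out (TreeNode and ColumnNode shapes are my own assumptions, pointers are used throughout, and nothing is freed, for brevity):
struct TreeNode {
    int value;
    TreeNode* left;
    TreeNode* right;
};

// One counter per column, linked to its left and right neighbours.
struct ColumnNode {
    int sum = 0;
    ColumnNode* prev = nullptr;
    ColumnNode* next = nullptr;
};

void vertsum(const TreeNode* n, ColumnNode* col)
{
    if (!n)
        return;
    col->sum += n->value;
    if (!col->prev) {                 // grow the list to the left on demand
        col->prev = new ColumnNode();
        col->prev->next = col;
    }
    vertsum(n->left, col->prev);
    if (!col->next) {                 // grow the list to the right on demand
        col->next = new ColumnNode();
        col->next->prev = col;
    }
    vertsum(n->right, col->next);
}
Start with a single ColumnNode for the root's column; after the call, walk the list from its leftmost node to read every column's sum.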
You could visit the binary tree in depth-first postorder, and use an offset to keep track of how far you moved to the left/right with respect to your starting node. Every time you move to the left, you decrement the offset, and every time you move to the right you increment the offset. If your visit procedure is called with an offset of 0, then it means that the node being visited has the same offset of your starting node (i.e. it's in the same column), and so you must add its value.
Pseudocode:
procedure visit(node n, int offset) {
    sumleft = 0
    sumright = 0
    if (n.left != null)
        sumleft = visit(n.left, offset - 1)
    if (n.right != null)
        sumright = visit(n.right, offset + 1)
    if (offset == 0)
        return n.value + sumleft + sumright
    else
        return sumleft + sumright;
}
For example, if you call
visit(A, 0)
you will get the following calls:
visit(A, 0) -> E.value + F.value + A.value
  visit(B, -1) -> E.value
    visit(D, -2) -> 0
      visit(H, -3) -> 0
      visit(I, -1) -> 0
    visit(E, 0) -> E.value
  visit(C, +1) -> F.value
    visit(F, 0) -> F.value
    visit(G, +2) -> 0
Another example, starting from node B:
visit(B, 0)
  visit(D, -1)
    visit(H, -2)
    visit(I, 0) -> here we return I.value
  visit(E, +1)
when recursion goes back to the initial call visit(B, 0) we have sumleft = I.value and sumright = 0, so we return the final result B.value + I.value, as expected.
The complexity is O(n), because you visit each node of the tree rooted at the starting node exactly once.
After thinking about the above algorithm, I realized it has a limitation, which becomes evident when we consider a more complex tree in which some node N in another subtree falls in the same column as B.
In that case visit(B, 0) would still return B.value + I.value, but this is not the expected result, because N is also in the same column. The following algorithm copes with this problem:
procedure visit(node n, int c, int t) {
    sumleft = 0;
    sumright = 0;
    if (n.left != null)
        sumleft = visit(n.left, c - 1, t)
    if (n.right != null)
        sumright = visit(n.right, c + 1, t)
    if (c == t)
        return n.value + sumleft + sumright;
    else
        return sumleft + sumright;
}
The idea is essentially the same, but we now have a parameter c which gives the current column and a parameter t which is the target column. If we want the sum of the elements in B's column, we can call visit(A, 0, -1): we always start the visit from node A (the tree's root), which is at column 0, and our target is column -1.
Following the recursion, visit(A, 0, -1) = B + I + N, as expected.
Complexity is always O(n), where n is the number of nodes in the tree, because we visit the entire tree with depth-first postorder, and we process each node only once.
If we want to compute the sum of every column, we can use the following algorithm
procedure visit(node n, int c) {
    if (n == null)
        return
    S{c} += n.value
    visit(n.left, c - 1)
    visit(n.right, c + 1)
}
and call once visit(A, 0), where A is the root node. Note that S{...} in the algorithm is a map whose keys are the columns numbers (..., -2, -1, 0, 1, 2, ...) and whose values (at the end of the algorithm) are the sums of the values of nodes in that column (S{1} will be the sum of nodes in column 1). We can also use an array, instead of a map, provided that we pay attention to the indexes (arrays have no negative indexes). The algorithm is still O(n), because we traverse the entire tree only once. However, in this case we need additional space to store the sum for all columns (the map, or the array). If I'm not mistaken a binary tree of height h will have 2*h + 1 columns.
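For completeness, here is a small compilable C++ sketch of this whole-tree version, using std::map for S and the same Node shape as the vertsum example above (my own illustration):
#include <map>

struct Node {
    int value;
    Node* left;
    Node* right;
};

// Accumulates the per-column sums of the subtree rooted at n into s.
void columnSums(const Node* n, int c, std::map<int, int>& s)
{
    if (!n)
        return;
    s[c] += n->value;
    columnSums(n->left, c - 1, s);
    columnSums(n->right, c + 1, s);
}
Calling columnSums(root, 0, s) once fills s, after which s[-1] holds the sum for column -1, s[0] for column 0, and so on.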
What about the following? (Inside your node class, assuming getData returns the integer value, hasRight() and hasLeft() are booleans indicating whether a right/left branch exists, and getRight() and getLeft() return the next node in the right/left branch.)
public int calculateVerticalLine(int numLine) {
    return calculateVerticalLineRecursive(numLine, 0);
}

protected int calculateVerticalLineRecursive(int numLine, int curPosition) {
    int result = 0;
    if (numLine == curPosition) result += this.getData();
    if (hasRight()) result += getRight().calculateVerticalLineRecursive(numLine, curPosition + 1);
    if (hasLeft()) result += getLeft().calculateVerticalLineRecursive(numLine, curPosition - 1);
    return result;
}
public class Solution {
    public static int getSum(BinaryTreeNode<Integer> root) {
        // Your code goes here.
        if (root == null) {
            return 0;
        }
        int leftnodesum = getSum(root.left);
        int rightnodesum = getSum(root.right);
        return root.data + leftnodesum + rightnodesum;
    }
}
