How many basic operations in the line:
int a = b++;
Since b is being incremented by one AND being stored in int a, does that mean there's 2? I just want to make sure because at first I've always considered this to be just one operation.
thanks!
There are two operations: the first is the assignment =, where the old value of b is assigned to a, and the second is the increment ++. As you can see, it is a post-increment, so the incrementation is a side effect. On the other hand, if it was a pre-increment, int a = ++b, the first operation would be the increment and the second would be assigning that incremented value of b to a.
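A quick illustration (a minimal sketch; the variable names are just for demonstration):

#include <iostream>

int main() {
    int b = 5;
    int a = b++;  // post-increment: a gets the old value of b, then b becomes 6
    std::cout << a << " " << b << std::endl;  // prints: 5 6

    int c = 5;
    int d = ++c;  // pre-increment: c becomes 6 first, then d gets 6
    std::cout << d << " " << c << std::endl;  // prints: 6 6
    return 0;
}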
I have a bit-string of length N containing exactly two 1-bits. I am trying to find a function to order these strings. For example:
F(110) = 1
F(101) = 2
F(011) = 3
The strategy I adopted was labeling the bits by their position, so that for the first case K=1 and L=2 and hence
F(1,2) = 1
F(1,3) = 2
F(2,3) = 3
Does anyone have an idea of what this function might be?
If you are dealing with actual strings, sort them alphabetically ascending. If you are dealing with integers, there are some workarounds:
Convert the integer into bit-strings and sort alphabetically ascending.
or
Reverse the bits in the integer (011 becomes 110) and sort numerically ascending.
However, these workarounds might be slow. The function F described by you turns out to be pretty simple (assuming you are given the positions of the 1-bits) and is therefore a good solution.
To come up with an implementation of F we first look at the sequence of all bit-strings with exactly two 1-bits. Here we don't care about the length of the bit-string. We simply increment the bit-strings from left to right (opposed to the usual Arabic interpretation of numbers where you increment from right to left).
Next to the actual bit-string I replaced all 0 by ., the left 1 by l, and the right 1 by r. This makes it easier to see the pattern.
1: 11 lr
2: 101 l.r
3: 011 .lr
4: 1001 l..r
5: 0101 .l.r
6: 0011 ..lr
7: 10001 l...r
8: 01001 .l..r
9: 00101 ..l.r
10: 00011 ...lr
11: 100001 l....r
… … …
The function F is supposed to count the steps needed to increment to a given bit-string.
In the following, L is the index of the left 1-bit and R is the index of the right 1-bit. As in your question, we use 1-based indices. That is, the leftmost character in a string has index 1.
For the right 1-bit to move one position to the right, the left 1-bit has to "catch up" first: while the right 1-bit sits at position r, the left 1-bit sweeps through positions 1 to r-1, which takes r-1 steps (counting the start step L=1 too). For the right 1-bit to reach a high position, the left 1-bit has to catch up multiple times, as it is returned to the start each time the right 1-bit moves one to the right. Each time, catching up takes a little longer, as the right 1-bit is further away from the start: the first time it takes 1 step, then 2, then 3, and so on. Thus, for the right 1-bit to reach position R, it takes 1+2+3+…+(R-2) = (R-1)·(R-2)/2 steps. After that, we only have to move the left 1-bit to its position, which takes L more steps. Therefore the function is
F(L,R) := (R-1)·(R-2) / 2 + L
Note that this function is only easy to implement if you already know L and R. If you have an integer and would need to determine L and R first, that might be slower than simply reversing the bits in the integer and sorting numerically ascending.
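For what it's worth, a minimal C++ sketch of F (the function and parameter names are just illustrative):

#include <iostream>

// Rank of a bit-string with exactly two 1-bits, given the 1-based
// positions L (left 1-bit) and R (right 1-bit), with L < R.
long long F(long long L, long long R) {
    return (R - 1) * (R - 2) / 2 + L;
}

int main() {
    std::cout << F(1, 2) << "\n";  // 1  ("11")
    std::cout << F(1, 3) << "\n";  // 2  ("101")
    std::cout << F(2, 3) << "\n";  // 3  ("011")
    return 0;
}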
Compare two numbers in the form of string lexicographically multiple times.
What I tried
The question is straightforward: compare the strings after each change. But I am getting a Time Limit Exceeded error because of the multiple queries.
I was searching the internet and came across segment trees for solving range queries. But alas, I am not able to visualise how they can help here.
Any hint is appreciated.
A segment tree seems overkill for this problem. When will B be lexicographically larger than A? When at some index i, A[i] = 0, B[i] = 1 and A[0:i] = B[0:i]. Iterate over both strings at the same time and keep in a set all indexes where they differ.
For each query at index i, update B[i] to 1. Then check whether B[i] = A[i]. If they are equal, erase i from the set of indexes. Otherwise, add it to the set. If there is no index left in the set, A and B are now equal => answer YES.
If there is at least one element, get the lowest index. If this index is j, that means A[0:j] = B[0:j] but A[j] != B[j]. So either A[j] is 0 and B[j] is 1, or A[j] is 1 and B[j] is 0. Depending on that, answer YES or NO.
This has a complexity of O(Q log N), where Q is the number of queries.
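A minimal sketch of that idea in C++ (the input format, the 0-based query indices, and the "set B[i] to 1" semantics are assumptions, since the original problem statement isn't shown):

#include <iostream>
#include <set>
#include <string>

int main() {
    std::string A, B;
    int Q;
    std::cin >> A >> B >> Q;  // assumed input: two equal-length binary strings, then Q queries

    std::set<int> diff;  // all indexes where A and B differ
    for (int i = 0; i < (int)A.size(); i++)
        if (A[i] != B[i]) diff.insert(i);

    while (Q--) {
        int i;
        std::cin >> i;  // assumed: the query sets B[i] to '1' (0-based index)
        B[i] = '1';
        if (A[i] == B[i]) diff.erase(i);
        else diff.insert(i);

        if (diff.empty()) {
            std::cout << "YES\n";  // A and B are now equal
        } else {
            int j = *diff.begin();  // lowest differing index
            // A[0:j] == B[0:j] and A[j] != B[j], so B > A exactly when B[j] == '1'
            std::cout << (B[j] == '1' ? "YES\n" : "NO\n");
        }
    }
    return 0;
}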
I came across a solution to the Coin Change problem here: Coin Change. I was able to understand the first recursive method and the second method, which uses DP with a 2D array, but I am not able to understand the logic behind the third solution.
As far as I have thought, the last method works for problems in which the sequence of coins used in the coin change is considered. Am I correct? Can anyone please explain if I am wrong?
Well I figured it out myself!
This can be proved using induction. Let table[k] denote the number of ways change can be given for a total of k. The algorithm consists of two loops: one controlled by i, which iterates through the array containing the different coins, and one controlled by j, which for a given i updates all the values in table.
Now suppose that, for a fixed i, we have calculated the number of ways change can be given for all values from 1 to n using only the first i coins, and these values are stored in table[1] to table[n]. When the i-controlled loop moves on to i+1, the value in table[j] for an arbitrary j is incremented by table[j - S[i+1]], which is nothing but the number of ways we can create j using at least one coin of value S[i+1] (S being the array which stores the coin values). Thus the total value in table[j] equals the number of ways we can make change for j with coins of value S[1]…S[i] (which was already stored) plus the value table[j - S[i+1]]. This is the same optimal substructure of the problem as used in the recursive algorithm.
#include <iostream>
#include <cstring>
using namespace std;

const int MAXN = 100005;  // assumed upper bound on the target sum
long long arr[MAXN];      // arr[j] = number of ways to make sum j
int a[MAXN];              // coin denominations

int main() {
    memset(arr, 0, sizeof(arr));  // zero the whole array (the original sizeof(size) only zeroed 4 bytes)
    int n, sum;
    cin >> n >> sum;              // number of coins and the target sum
    for (int i = 0; i < n; i++)
        cin >> a[i];
    arr[0] = 1;                   // one way to make sum 0: use no coins
    for (int i = 0; i < n; i++)            // for each coin...
        for (int j = a[i]; j <= sum; j++)  // ...update every reachable sum
            arr[j] += arr[j - a[i]];
    cout << arr[sum];
    return 0;
}
The array arr is initialised to 0 to indicate that, before any coins are considered, the number of ways a sum of i can be represented is zero. However, the number of ways in which a sum of 0 can be represented is 1 (using no coins), hence arr[0] = 1.
Further, we take each coin and update each position in the array, starting from that coin's denomination.
arr[j] += arr[j - a[i]] means that we increment the number of ways to represent the sum j by the number of ways to represent the remainder j - a[i].
In the end, we output arr[sum].
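For example, with coins {1, 2} and sum = 3, the code above evolves arr as follows:

arr = [1, 0, 0, 0]   (initially: only sum 0 has one way)
arr = [1, 1, 1, 1]   (after coin 1)
arr = [1, 1, 2, 2]   (after coin 2)

So arr[3] = 2, corresponding to {1, 1, 1} and {1, 2}.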
I'm finding it difficult to understand why/how the worst and average case for searching for a key in an array/list using binary search is O(log(n)).
log(1,000,000) is only 6. log(1,000,000,000) is only 9 - I get that, but I don't understand the explanation. If one did not test it, how do we know that the avg/worst case is actually log(n)?
I hope you guys understand what I'm trying to say. If not, please let me know and I'll try to explain it differently.
Worst case
Every time the binary search code makes a decision, it eliminates half of the remaining elements from consideration. So you're dividing the number of elements by 2 with each decision.
How many times can you divide by 2 before you are down to only a single element? If n is the starting number of elements and x is the number of times you divide by 2, we can write this as:
n / (2 * 2 * 2 * ... * 2) = 1 [the '2' is repeated x times]
or, equivalently,
n / 2^x = 1
or, equivalently,
n = 2^x
So log base 2 of n gives you x, which is the number of decisions being made.
Finally, you might ask, if I used log base 2, why is it also OK to write it as log base 10, as you have done? The base does not matter because the difference is only a constant factor which is "ignored" by Big O notation.
Average case
I see that you also asked about the average case. Consider:
There is only one element in the array that can be found on the first try.
There are only two elements that can be found on the second try. (Because after the first try, we chose either the right half or the left half.)
There are only four elements that can be found on the third try.
You can see the pattern: 1, 2, 4, 8, ... , n/2. To express the same pattern going in the other direction:
Half the elements take the maximum number of decisions to find.
A quarter of the elements take one fewer decision to find.
etc.
Since half of the elements take the maximum amount of time, it doesn't matter how much less time the other elements take. We could assume that all elements take the maximum amount of time, and even if half of them actually take 0 time, our assumption would not be more than double whatever the true average is. We can ignore "double" since it is a constant factor. So the average case is the same as the worst case, as far as Big O notation is concerned.
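To make this concrete, here is a rough calculation of the exact average, assuming every element is equally likely to be searched and n = 2^x:

(1*1 + 2*2 + 3*4 + ... + x*2^(x-1)) / n = ((x-1)*2^x + 1) / n ≈ x - 1

So the average number of decisions is about log2(n) - 1, which is still O(log n).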
For binary search, the array should be arranged in ascending or descending order.
In each step, the algorithm compares the search key value with the key value of the middle element of the array.
If the keys match, then a matching element has been found and its index, or position, is returned.
Otherwise, if the search key is less than the middle element's key, then the algorithm repeats its action on the sub-array to the left of the middle element.
Or, if the search key is greater, then the algorithm repeats its action on the sub-array to the right.
If the remaining array to be searched is empty, then the key cannot be found in the array and a special "not found" indication is returned.
So, a binary search is a dichotomic divide-and-conquer search algorithm. It therefore takes logarithmic time to perform the search, as the number of elements is halved in each iteration.
For sorted lists, on which we can do a binary search, each "decision" compares your key to the middle element: if your key is greater, the search takes the right half of the list; if less, the left half; if it's a match, it returns the element at that position. You effectively halve the list with every decision, yielding O(logn).
Binary search, however, only works for sorted lists. For unsorted lists you can do a straight search starting with the first element, yielding a complexity of O(n).
O(logn) < O(n)
Which approach is best, though, depends entirely on how many searches you'll be doing, your inputs, etc.
For binary search, the prerequisite is a sorted array as input.
• As the list is sorted:
• Certainly we don't have to check every word in the dictionary to look up a word.
• A basic strategy is to repeatedly halve our search range until we find the value.
• For example, look for 5 in the list of 9 #s below: v = 1 1 3 5 8 10 18 33 42
• We would first start in the middle: 8
• Since 5<8, we know we can look at just the first half: 1 1 3 5
• Looking at the middle # again, narrow down to 3 5
• Then we stop when we're down to one #: 5
How many comparisons are needed: 4 = ceil(log(base 2)(9)) = O(log(base 2) n)
int binary_search (const vector<int>& v, int val) {
    int from = 0;
    int to = v.size() - 1;
    while (from <= to) {
        int mid = from + (to - from) / 2;  // middle element; avoids overflow of from+to
        if (val == v[mid])
            return mid;                    // found: return its index
        else if (val > v[mid])
            from = mid + 1;                // search the right half
        else
            to = mid - 1;                  // search the left half
    }
    return -1;                             // not found
}
Given a string s, find the shortest string t such that t^m = s for some integer m.
Examples:
s="aabbb" => t="aabbb"
s="abab" => t = "ab"
How fast can it be done?
Of course, naively, for every m that divides |s|, I can check whether substring(s, 0, |s|/m)^m = s.
One can figure out the solution in O(d(|s|)·n) time, where n = |s| and d(x) is the number of divisors of x. Can it be done more efficiently?
This is the problem of computing the period of a string. Knuth, Morris and Pratt's sequential string matching algorithm is a good place to get started. This is in a paper entitled "Fast Pattern Matching in Strings" from 1977.
If you want to get fancy with it, then check out the paper "Finding All Periods and Initial Palindromes of a String in Parallel" by Breslauer and Galil in 1991. From their abstract:
An optimal O(log log n) time CRCW-PRAM algorithm for computing all
periods of a string is presented. Previous parallel algorithms compute
the period only if it is shorter than half of the length of the
string. This algorithm can be used to find all initial palindromes of
a string in the same time and processor bounds. Both algorithms are
the fastest possible over a general alphabet. We derive a lower bound
for finding palindromes by a modification of a previously known lower
bound for finding the period of a string [3]. When p processors are
available the bounds become Θ(⌈n/p⌉ + log log_(⌈1+p/n⌉) 2p).
I really like this thing called the z-algorithm: http://www.utdallas.edu/~besp/demo/John2010/z-algorithm.htm
For every position it calculates the length of the longest substring starting there that is also a prefix of the whole string (in linear time, of course).
a a b c a a b x a a a z
1 0 0 3 1 0 0 2 2 1 0
Given this "z-table" it is easy to find all strings that can be exponentiated to the whole thing. Just check for all positions if pos+z[pos] = n.
In our case:
a b a b
0 2 0
Here pos = 2 gives you 2+z[2] = 4 = n hence the shortest string you can use is the one of length 2.
This will even allow you to find cases where only a prefix of the exponentiated string matches, like:
a b c a
0 0 1
Here (abc)^2 can be cut down to your original string. But of course, if you don't want matches like this, just go over the divisors only.
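For illustration, a minimal C++ sketch of this approach (a standard z-array computation, checking divisors only so that t^m equals s exactly; the function names are just illustrative):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Standard z-algorithm: z[i] = length of the longest substring
// starting at i that is also a prefix of s.
std::vector<int> z_array(const std::string& s) {
    int n = s.size();
    std::vector<int> z(n, 0);
    int l = 0, r = 0;
    for (int i = 1; i < n; i++) {
        if (i < r) z[i] = std::min(r - i, z[i - l]);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) z[i]++;
        if (i + z[i] > r) { l = i; r = i + z[i]; }
    }
    return z;
}

// Length of the shortest t with t^m == s: the smallest divisor p
// of n such that p + z[p] == n.
int shortest_period(const std::string& s) {
    int n = s.size();
    std::vector<int> z = z_array(s);
    for (int p = 1; p < n; p++)
        if (n % p == 0 && p + z[p] == n) return p;
    return n;  // s itself is the shortest such t
}

int main() {
    std::cout << shortest_period("abab") << "\n";   // 2 ("ab")
    std::cout << shortest_period("aabbb") << "\n";  // 5 ("aabbb")
    return 0;
}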
Yes you can do it in O(|s|) time.
You can search for a "target" string of length n in a "source" string of length m in O(n+m) time. Build a solution based on that.
Let both the source and the target be s. An additional constraint is that position 1, and any position in the source that does not divide |s|, is not a valid starting position for the match. Of course the search per se will always fail. But if there is a partial match and you have reached the end of the source string, then you have a solution to the original problem.
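One concrete way to build that solution is with the KMP failure (border) function rather than a literal search: n - fail[n] is the smallest shift at which s matches itself, and it is the length of the shortest t exactly when it divides n. A minimal sketch (the function name is just illustrative):

#include <iostream>
#include <string>
#include <vector>

// Length of the shortest t with t^m == s, via the KMP failure function.
int shortest_period_kmp(const std::string& s) {
    int n = s.size();
    std::vector<int> fail(n + 1, 0);  // fail[i] = longest proper border of s[0:i]
    for (int i = 1; i < n; i++) {
        int k = fail[i];
        while (k > 0 && s[i] != s[k]) k = fail[k];
        if (s[i] == s[k]) k++;
        fail[i + 1] = k;
    }
    int p = n - fail[n];  // smallest shift at which s matches itself
    return (n % p == 0) ? p : n;
}

int main() {
    std::cout << shortest_period_kmp("abab") << "\n";   // 2
    std::cout << shortest_period_kmp("aabbb") << "\n";  // 5
    return 0;
}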
A modification of Boyer-Moore could possibly handle this in O(n), where n is the length of s:
http://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm