Count no. of words in O(n) - string

I am on an interview ride here. One more interview question I had difficulties with.
“A rose is a rose is a rose.” Write an algorithm that prints the number of times each character/word occurs, e.g. A – 3, Rose – 3, Is – 2. Also ensure that when you print the results, they are in the order in which they appeared in the original sentence. All this in order n.
I did get a solution that counts the number of occurrences of each word in the sentence, in the same order as in the original sentence. I used a Dictionary<string,int> to do it. However, I did not understand what is meant by "order of n". That is something I need you to explain.

There are 26 characters, so you can use counting sort to sort them; in your counting sort you can keep an index that records when each character was first visited, to preserve the order of occurrence. [They can be sorted by their count and their first occurrence with a sort like radix sort.]
Edit: for words, the first thing everyone thinks of is a hash table: insert the words into the hash and count them that way. The counts can then be sorted in O(n), because they are all within 1..n, so you can still sort them with counting sort in O(n); for the order of occurrence you can traverse the string and record the position where each word first appears.
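Here is a minimal sketch of the character version in Python (an illustration only, assuming lowercase English letters): a fixed 26-slot count array plus a record of when each character was first seen, so the output preserves the order of occurrence.

    def char_counts_in_order(text):
        counts = [0] * 26
        first_seen = [-1] * 26                 # position where each letter was first met
        pos = 0
        for ch in text.lower():
            if 'a' <= ch <= 'z':
                i = ord(ch) - ord('a')
                if counts[i] == 0:
                    first_seen[i] = pos
                counts[i] += 1
                pos += 1
        # print in order of first occurrence; the sort is over a constant 26 slots
        for i in sorted(range(26), key=lambda j: first_seen[j]):
            if counts[i] > 0:
                print(chr(ord('a') + i), counts[i])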

Order of n means you traverse the string only once (or some small constant multiple of n), where n is the number of characters in the string.
So your solution that stores each string and the number of its occurrences is O(n), order of n, as you loop through the complete string only once.
However, it uses extra space in the form of the list you created.

Order N refers to the Big O computational complexity analysis, where you get a good upper bound on algorithms. It is a theory we cover early in a Data Structures class, so we can torment, I mean help, the students gain facility with it as we traverse, in a balanced way, heaps of different trees of knowledge, all different. In your case they want your algorithm's compute time to grow in proportion to the size of the text as it grows.

It's a reference to Big O notation. Basically the interviewer means that you have to complete the task with an O(N) algorithm.

"Order n" is referring to Big O notation. Big O is a way for mathematicians and computer scientists to describe the behavior of a function. When someone specifies searching a string "in order n", that means that the time it takes for the function to execute grows linearly as the length of that string increases. In other words, if you plotted time of execution vs length of input, you would see a straight line.
Saying that your function must be of order n does not mean that your function must be exactly O(n); a function with a Big O less than O(n) would also be considered acceptable. In your problem's case, this would not be possible (because in order to count a letter, you must "touch" that letter, so there must be at least one operation per character of the input).

One possible method is to traverse the string linearly. Create a hash and a list. The idea is to use the word as the hash key and increment the value for each occurrence. If the key does not yet exist in the hash, add the word to the end of the list. After traversing the string, go through the list in order, using the hash values as the counts.
The order of the algorithm is O(n). The hash lookup and list append operations are O(1) (or very close to it).
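A minimal sketch of this hash + list approach in Python (the function name is just for illustration):

    def word_counts_in_order(sentence):
        counts = {}      # word -> number of occurrences (the "hash")
        order = []       # words in order of first appearance (the "list")
        for word in sentence.lower().split():
            if word not in counts:
                counts[word] = 0
                order.append(word)
            counts[word] += 1
        for word in order:
            print(word, counts[word])

    word_counts_in_order("A rose is a rose is a rose")
    # prints: a 3, rose 3, is 2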

Related

Find lexicographically smallest string with given hash value [Competitive Coding]

I encountered the following problem for which I couldn't quite find the appropriate solution.
The problem says: for a given string having a specific hash value, find the lexicographically smallest string (which is not the same as the given one) of the same length and same hash value (if one exists). E.g., for the following value mapping of the alphabet: {a:0, b:1, c:2, ..., z:25}, if the given string is "ady" with hash value 27, the lexicographically smallest one (from all possible ones excluding the given one) would be "acz".
Solution approach I could think of:
I reduced the problem to the Coin-Change problem and resorted to finding all possible combinations for the given sum. Out of all the obtained solutions, I sort them and pick the smallest (or the next smallest if the given string is already the smallest).
The problem, however, lies in finding all possible solutions (even with a DP approach), which might be inefficient for larger inputs.
My doubt is:
What solution strategy (possibly even a greedy one) could give a better time complexity than the above?
I cannot guarantee that this will give you a lower complexity, but a couple of things: (1) you don't need to search the whole space, just the space of strings whose lexicographic value is less than or equal to the given string's; (2) you can formulate it as an integer programming problem:
Assume your character space is the letters, and each letter is given its index in [0-25], so a corresponds to 0, b to 1, and so forth. Let x_i be the number of letters in your string corresponding to index i. You can formulate your problem as:
    minimize    sum_i (w_i * x_i)
    subject to  sum_i (a_i * x_i) = M
                sum_i (x_i) = n
                sum_i (w_i * x_i) <= N
                x_i >= 0, x_i integer
Where w_i = 26^i, a_i is equal to hash(letter(i)), n is the number of letters of the original string, and N is the hash value of the original string. This is an integer programming problem, so you can try plugging it into a solver. The original problem is very similar to the subset sum problem with a fixed subset size (where the hash values are the elements you are summing over, and the subset size is the length of the string), so you might also want to take a look at that, although as you will see from the answers there, it is a complicated problem.
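Purely as an illustration (not part of the original answer), here is a minimal sketch of this formulation in Python using the PuLP modelling library; any IP solver would do. The function name, and taking a_i = i (the a:0..z:25 mapping from the question), are assumptions of the sketch.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

    def letter_counts_with_hash(n, M, N, num_letters=26):
        # x[i] = number of letters with index i in the result string
        a = list(range(num_letters))                 # a_i = hash(letter(i)), assumed a=0..z=25
        w = [26 ** i for i in range(num_letters)]    # w_i = 26^i as in the formulation above
        prob = LpProblem("same_hash_string", LpMinimize)
        x = [LpVariable(f"x_{i}", lowBound=0, cat="Integer") for i in range(num_letters)]
        prob += lpSum(w[i] * x[i] for i in range(num_letters))         # objective
        prob += lpSum(a[i] * x[i] for i in range(num_letters)) == M    # hash constraint
        prob += lpSum(x) == n                                          # string length
        prob += lpSum(w[i] * x[i] for i in range(num_letters)) <= N    # bound from the answer
        prob.solve()
        counts = [int(v.varValue) if v.varValue is not None else 0 for v in x]
        return LpStatus[prob.status], counts

Note that this returns only the letter counts x_i; arranging them (smallest letters first) into an actual string is a separate step.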

Given k words, determine word equality in constant time

I have encountered this question while studying for algorithms test:
Given a set of k words (strings) with a total character count of n (meaning the sum of all word lengths is n), perform some manipulation on the words in O(n) time, such that whenever 2 of the words are compared, the answer (whether they are identical or not) is returned in O(1) time.
It's an interesting question but I could not find any direction to deal with it...
Construct a trie of all of the words, and for each word store the index of the trie node reached after its last character. This is an O(n) operation.
Given two words, they are the same if and only if those stored node indices are the same.
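A minimal sketch of this in Python (names are just for illustration); the trie is a list of nodes, and each word gets a label equal to the node it ends at:

    def preprocess(words):
        children = [{}]              # node 0 is the root; each node maps char -> child id
        labels = []                  # labels[i] = node reached after the last char of words[i]
        for w in words:
            node = 0
            for ch in w:
                nxt = children[node].get(ch)
                if nxt is None:
                    nxt = len(children)
                    children[node][ch] = nxt
                    children.append({})
                node = nxt
            labels.append(node)
        return labels                # O(n) total work

    def same_word(labels, i, j):
        # O(1): words[i] and words[j] are identical iff they end at the same node
        return labels[i] == labels[j]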

special interleaving string coding

The interleaving rule is to form a new word by inserting one word into another, in a letter-by-letter fashion, as shown below:
a p p l e
o l d
=
aoplpdle
It does not matter which word goes first. (oalpdple is also valid)
The problem is: given a vector of strings {old, apple, talk, aoplpdle, otladlk}, find all the words that are valid interleavings of two words from the vector.
The simplest solution requires at least O(n^2) time complexity: take every pair of words, form the interleaved word, and check whether it is in the vector.
Is there better solutions?
Sort by length. You only need to check combinations where the sum of the lengths of 2 entries (words) equals the length of an existing entry.
This will reduce your average complexity. I didn't take the time to compute the worst-case complexity, but it's probably lower than O(n^2) as well.
You can also optimize the "inner loop" by rejecting matches early: you don't really need to construct the entire interleaved word to reject a match; iterate the candidate word alongside the 2 input words until you find a mismatch. This won't reduce your worst-case complexity, but it will have a positive effect on overall performance.
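A minimal sketch of that early-rejection check in Python, assuming the alternating interleaving rule from the example (function names are just for illustration):

    def is_interleaving(first, second, candidate):
        # candidate must alternate first/second letter by letter, then carry the
        # leftover tail of the longer word; reject at the first mismatch
        if len(first) + len(second) != len(candidate):
            return False
        i = j = k = 0
        while i < len(first) and j < len(second):
            if candidate[k] != first[i] or candidate[k + 1] != second[j]:
                return False
            i, j, k = i + 1, j + 1, k + 2
        rest = first[i:] or second[j:]
        return candidate[k:] == rest

    def is_valid_interleaving(a, b, candidate):
        # either word may go first (e.g. aoplpdle and oalpdple are both valid)
        return is_interleaving(a, b, candidate) or is_interleaving(b, a, candidate)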

Finding similar strings in large datasets

I'm using levenshtein distance to retrieve similar strings from a list. At the moment the list has just a few thousand items, but we'll need to support at least 100k items.
I'm trying to make this more efficient, and one technique I came up with was to calculate the Levenshtein distance only on strings that are of similar length. I also thought about filtering on the initial character, i.e. if the string to search for starts with b then I'll run the calculation only on the strings that start with b. But I'm not sure I can assume this will work all the time.
I was wondering if you all have a better way of getting this done?
Thanks
One way to go would be to hope that a match with small edit distance would have within it a short exact match. If you assume this, then, given the string ABCDEF, retrieve all strings containing ABC, BCD, CDE, or DEF, and compute their edit distances. You may even find that the best match among these is so close that any closer match must have a short match inside it, so you would have found it already. You would have to accept that if you are unlucky you may miss some good matches, or be forced to go through all the possibilities one by one.
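A minimal sketch of that substring-filtering idea in Python, assuming a 3-gram index over the stored strings and some existing levenshtein(a, b) function (not shown) applied only to the surviving candidates:

    from collections import defaultdict

    def build_index(strings, q=3):
        index = defaultdict(set)
        for sid, s in enumerate(strings):
            for i in range(len(s) - q + 1):
                index[s[i:i + q]].add(sid)       # which strings contain this q-gram
        return index

    def candidate_ids(query, index, q=3):
        # only strings sharing at least one q-gram with the query survive;
        # run the edit-distance computation on these candidates only
        ids = set()
        for i in range(len(query) - q + 1):
            ids |= index.get(query[i:i + q], set())
        return ids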
As an alternative to building a database of substrings, you could build a suffix array (http://en.wikipedia.org/wiki/Suffix_array) and LCP array from a string obtained by concatenating all the stored strings, separating them with a marker character not otherwise used. This takes time and space linear in the input size. You would then search for exact matches by looking in the suffix array for suffixes starting with ABCDEF, BCDEF, CDEF, and DEF.

sequential search

For sequential search, what is the average number of comparisons needed to find a record in a file?
A sequential search starts from the beginning of the file and checks each element one-by-one until the desired element is found. Assuming that the record you are searching for exists in the file exactly once and could be anywhere in the file with equal probability, the average number of comparisons is equal to half the number of records in the file.
However if the record does not exist in the file, you will have to examine every single record in the file before discovering this.
For a list with n items, the best case is when the value is equal to the first element of the list, in which case only one comparison is needed. The worst case is when the value is not in the list (or occurs only once at the end of the list), in which case n comparisons are needed.
Asymptotically, therefore, the worst-case cost and the expected cost of linear search are both O(n).
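A minimal sketch of sequential search in Python that also counts comparisons, illustrating the roughly n/2 average and n worst case described above (the function name is just for illustration):

    def sequential_search(records, target):
        comparisons = 0
        for i, record in enumerate(records):
            comparisons += 1
            if record == target:
                return i, comparisons      # found after i + 1 comparisons
        return -1, comparisons             # not found: all n records were examined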
I would like to add a few points that the previous answers fail to mention:
In general, sequential search takes O(N) time.
We must also consider whether the file is available on one device or spread over multiple devices; in the case of T RAMs, the complexity becomes O(T*N/(1+log(T))).
When combined with data structures such as an R-Tree, it can give a best-case time complexity of O(N/log(log(N))) for records in a file.
It also depends on the structure/format of the file: if the data fields are already available in a hash map, a sequential search is a poor fit.
