Given k words, determine word equality in constant time - string

I have encountered this question while studying for algorithms test:
Given a set of k words (strings) with a total character count of n (i.e., the sum of all word lengths is n), perform some O(n)-time preprocessing on the words such that whenever two words are compared, the answer (whether they are identical or not) is returned in O(1) time.
It's an interesting question but I could not find any direction to deal with it...

Construct a trie of all the words (storing the trie nodes in an array), and for each word record the index of the node reached at its last character. This is an O(n) operation.
Two words are identical if and only if their end-node indices are the same.
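A minimal sketch of that trie idea in Python (class and method names are illustrative, not from the answer):

from typing import List

class TrieEquality:
    # Preprocess the words in O(n) total time; equality queries are then O(1).
    def __init__(self, words: List[str]):
        self.nodes = [{}]        # node 0 is the root; each node maps char -> child index
        self.end_node = []       # end_node[i] = index of the node where words[i] ends
        for word in words:
            cur = 0
            for ch in word:
                nxt = self.nodes[cur].get(ch)
                if nxt is None:
                    self.nodes.append({})
                    nxt = len(self.nodes) - 1
                    self.nodes[cur][ch] = nxt
                cur = nxt
            self.end_node.append(cur)

    def equal(self, i: int, j: int) -> bool:
        # Two words are identical iff they end at the same trie node.
        return self.end_node[i] == self.end_node[j]

words = ["rose", "is", "a", "rose", "talk"]
eq = TrieEquality(words)
print(eq.equal(0, 3))   # True  (both are "rose")
print(eq.equal(1, 2))   # False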

Related

Find lexicographically smallest string with given hash value [Competitive Coding]

I encountered the following problem for which I couldn't quite find the appropriate solution.
The problem: for a given string with a specific hash value, find the lexicographically smallest string (not the same as the given one) of the same length and the same hash value, if one exists. E.g. for the following value mapping of the alphabet: {a:0, b:1, c:2, ..., z:25}, if the given string is "ady" with hash value 27, the lexicographically smallest string (among all possibilities excluding the given one) would be "acz".
Solution approach I could think of:
I reduced the problem to the coin-change problem and resorted to finding all possible combinations for the given sum. Out of all the obtained solutions, I sort them and pick the smallest (or the next smallest if the given string is itself the smallest).
The problem, however, lies in enumerating all possible solutions (even with a DP approach), which can be inefficient for larger inputs.
My doubt is:
What solution strategy (possibly even Greedy) could give a better time complexity than above?
I cannot guarantee that this will give you a lower complexity, but a couple of things: (1) you don't need to search the whole space, just the strings whose lexicographic value is less than or equal to the given string's; (2) you can formulate it as an integer programming problem.
Assume your character space is the letters and each letter is given its index in [0, 25], so a corresponds to 0, b to 1 and so forth. Let x_i be the number of letters in your string with index i. You can then formulate the problem as:
minimize     sum_i(w_i * x_i)
subject to   sum_i(a_i * x_i) = M
             sum_i(x_i) = n
             sum_i(w_i * x_i) <= N
             x_i >= 0 and integer for all i
where w_i = 26^i, a_i = hash(letter i), n is the number of letters of the original string, M is its hash value, and N is its weight sum_i(w_i * y_i), with y_i the letter counts of the original string (this encodes point 1 above). This is an integer programming problem, so you can try plugging it into a solver. The original problem is very similar to the subset-sum problem with a fixed subset size (the hash values are the elements you are summing over, and the subset size is the length of the string), so you might also want to take a look at that, although as you will see from the answer, it is a complicated problem.
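The answer above formulates an integer program. As a separate sketch of the greedy direction the question asks about, and assuming the hash is simply the sum of the letter values (as in the "ady" example): the lexicographically smallest string of a given length and hash can be built left to right by always picking the smallest letter that still leaves a reachable remainder for the suffix. The function name is made up, and this ignores the "must differ from the given string" twist (if the greedy result equals the given string, extra work is needed):

def smallest_with_hash(length, target):
    # Hypothetical helper: lexicographically smallest string of `length`
    # lowercase letters (a=0 ... z=25) whose letter values sum to `target`,
    # or None if no such string exists.
    if not (0 <= target <= 25 * length):
        return None
    out = []
    remaining = target
    for pos in range(length):
        positions_left = length - pos - 1
        # Smallest letter value v that keeps 0 <= remaining - v <= 25 * positions_left.
        v = max(0, remaining - 25 * positions_left)
        out.append(chr(ord('a') + v))
        remaining -= v
    return ''.join(out)

print(smallest_with_hash(3, 27))  # 'acz', matching the example above

This runs in O(length) time, which answers the complexity concern for the unrestricted version of the problem.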

special interleaving string coding

The interleaving rule is to form a new word by inserting one word into another, letter by letter, as shown below:
a p p l e
o l d
=
aoplpdle
It does not matter which word goes first. (oalpdple is also valid)
The problem: given a vector of strings {old, apple, talk, aoplpdle, otladlk}, find all the words that are valid interleavings of two words from the vector.
The simplest solution needs at least O(n^2) time: take every pair of words, form their interleaving, and check whether it is in the vector.
Are there better solutions?
Sort by length. You only need to check pairs of words whose combined length equals the length of some existing entry.
This will reduce your average complexity. I didn't take the time to compute the worst-case complexity, but it's probably lower than O(n^2) as well.
You can also optimize the "inner loop" by rejecting matches early: you don't really need to construct the entire interleaved word to reject a match; iterate over the candidate word alongside the two input words until you find a mismatch. This won't reduce the worst-case complexity, but it will have a positive effect on overall performance.
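A rough sketch of those two optimizations (length bucketing plus early rejection); names are illustrative:

from collections import defaultdict

def matches(cand, first, second):
    # True if cand is the letter-by-letter interleaving of first and second
    # (with first starting); bails out at the first mismatching character
    # instead of building the whole interleaved word.
    if len(cand) != len(first) + len(second):
        return False
    i = j = k = 0
    while i < len(first) and j < len(second):   # strict alternation phase
        if cand[k] != first[i]:
            return False
        i += 1
        k += 1
        if cand[k] != second[j]:
            return False
        j += 1
        k += 1
    # One word is exhausted; the rest of cand must equal the other word's tail.
    return cand[k:] == first[i:] + second[j:]

def find_interleavings(words):
    # Words in the vector that are interleavings of two vector words,
    # trying only pairs whose lengths sum to the candidate's length.
    by_len = defaultdict(list)
    for w in words:
        by_len[len(w)].append(w)
    result = []
    for cand in words:
        if any(matches(cand, a, b) or matches(cand, b, a)
               for la in by_len if (len(cand) - la) in by_len
               for a in by_len[la]
               for b in by_len[len(cand) - la]):
            result.append(cand)
    return result

print(find_interleavings(["old", "apple", "talk", "aoplpdle", "otladlk"]))
# ['aoplpdle', 'otladlk']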

longest common substring for 2/3 strings : suffix array vs dynamic programming approach

If I want to find the longest common substring of two strings, which approach will be more efficient in terms of time/space complexity: suffix arrays or DP?
DP incurs O(m*n) space with O(m*n) time complexity. What would the time complexity of the suffix array approach be?
1) Compute the suffixes: O(m) + O(n)
2) Sort them: O((m+n) log(m+n))
3) Find the longest common prefix of the m+n-1 adjacent pairs? [I'm not sure how to count the comparisons]
Suffix arrays let us do many more things with the substrings (like searching for a substring, etc.), but since the rest of that functionality is not needed here, would DP be considered the easier/cleaner approach? Which one should be used when comparing two strings?
Also, what if we have more than 2 strings?
A suffix array would be better. The LCS (longest common substring of N strings) problem can be solved as follows:
Concatenate S1, S2, ..., Sn as
S = S1 $1 S2 $2 ... $n-1 Sn, where the $i are special symbols (sentinels) that are pairwise distinct and lexicographically smaller than every symbol of the original alphabet.
Compute the suffix array. A suffix array is usually built in O(n log n), but there is an important algorithm called DC3 that computes suffix arrays in O(n), where n is the total length of the N strings. You can google this algorithm.
Compute the LCP values of adjacent suffixes. The answer is the largest LCP value whose two adjacent suffixes come from different input strings (for more than two strings, slide a window over the suffix array that covers suffixes from all of the strings).
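For contrast with the suffix-array route, the DP mentioned in the question is short to write. A minimal two-string sketch, O(m*n) time with the table kept as two rows (so O(min extra space per row)); the function name is illustrative:

def longest_common_substring(s, t):
    # dp value cur[j] = length of the longest common suffix of s[:i] and t[:j].
    best_len, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return s[best_end - best_len:best_end]

print(longest_common_substring("xabcdey", "zabcde"))  # 'abcde'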

KMP algorithm for multiple occurrences

Is it still possible to achieve O(n) time when searching for multiple occurrences with the Knuth–Morris–Pratt algorithm?
Suppose we have a string S[0..N]. Recall that the i-th entry of the prefix array stores the length of the longest proper prefix of S[0..i] that is also a suffix of S[0..i].
We can calculate the prefix array P for pattern$subject (assuming that $ does not occur in subject). It remains to find the indices i with P[i] == length(pattern), which can be done in linear time.
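A sketch of that trick: build the prefix function of pattern + '$' + subject and report every position where its value equals len(pattern); the function name and separator choice are illustrative:

def kmp_all_occurrences(pattern, subject, sep='$'):
    # All starting indices of pattern in subject, in O(len(pattern) + len(subject))
    # time. Assumes sep occurs in neither string.
    s = pattern + sep + subject
    p = [0] * len(s)                       # prefix function of the combined string
    for i in range(1, len(s)):
        k = p[i - 1]
        while k > 0 and s[i] != s[k]:
            k = p[k - 1]
        if s[i] == s[k]:
            k += 1
        p[i] = k
    m = len(pattern)
    # p[i] == m means a full match ends at index i of s; translate that
    # back to a starting index in subject (subject starts at index m + 1).
    return [i - 2 * m for i in range(len(s)) if p[i] == m]

print(kmp_all_occurrences("ana", "bananana"))  # [1, 3, 5]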

Count no. of words in O(n)

I am on an interview ride here. One more interview question I had difficulties with.
“A rose is a rose is a rose” Write an algorithm that prints the number of times each character/word occurs, e.g. a – 3, rose – 3, is – 2. Also ensure that when you print the results, they appear in the order in which they were present in the original sentence. All this in order n.
I did manage to count the number of occurrences of each word in the sentence, in the order in which they appear in the original sentence; I used a Dictionary<string,int> to do it. However, I did not understand what is meant by "order of n". That is something I need you guys to explain.
There are only 26 characters, so you can count them with a counting-sort-style array; while counting, record the index at which each character is first seen so you can preserve the order of first occurrence. (They could also be sorted by count and first occurrence with a sort like radix sort.)
Edit: for words, the first thing everyone thinks of is a hash table: insert the words into the hash and count them that way. The counts can still be sorted in O(n), because all the numbers lie in 1..n, so a counting sort works in O(n); and for the order of occurrence you can traverse the string once and record where each word first appears.
Order of n means you traverse the string only once, or some constant multiple of n times, where n is the number of characters in the string.
So your solution that stores each string and its number of occurrences is O(n), order of n, because you loop through the complete string only once.
However, it uses extra space in the form of the dictionary you created.
Order N refers to the Big O computational complexity analysis where you get a good upper bound on algorithms. It is a theory we cover early in a Data Structures class, so we can torment, I mean help the student gain facility with it as we traverse in a balanced way, heaps of different trees of knowledge, all different. In your case they want your algorithm to grow in compute time proportional to the size of the text as it grows.
It's a reference to Big O notation. Basically the interviewer means that you have to complete the task with an O(N) algorithm.
"Order n" is referring to Big O notation. Big O is a way for mathematicians and computer scientists to describe the behavior of a function. When someone specifies searching a string "in order n", that means that the time it takes for the function to execute grows linearly as the length of that string increases. In other words, if you plotted time of execution vs length of input, you would see a straight line.
Saying that your function must be of order n does not mean that it must be exactly O(n); a function with a Big O less than O(n) would also be considered acceptable. In your problem's case that is not possible, because in order to count a letter you must "touch" that letter, so there must be at least one operation per character of the input.
One possible method is to traverse the string linearly, using a hash table and a list. The idea is to use each word as a hash key and increment its value for each occurrence. If the word is not yet in the hash, append it to the end of the list. After traversing the string, go through the list in order, using the hash values as the counts (a sketch follows below).
The order of the algorithm is O(n). The hash lookup and list append operations are O(1) (or very close to it).
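A minimal sketch of that hash-plus-order idea; in Python 3.7+ a plain dict preserves insertion order, so it can serve as both the hash and the list (the function name is illustrative):

def word_counts_in_order(sentence):
    # One O(n) pass; first-seen order is preserved by dict insertion order.
    counts = {}
    for word in sentence.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

for word, count in word_counts_in_order("A rose is a rose is a rose").items():
    print(word, count)
# a 3
# rose 3
# is 2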
