algorithm for finding all substrings from a specific alphabet in a string in O(m+n) time

Given a string S, find all maximal substrings that contain chars from alphabet A in O(|S|+|A|) time. A "maximal substring" is a substring of S surrounded by chars that are not in alphabet A, or by the string boundaries.
example:
S = rerwmkwerewkekbvverqwewevbvrewqwmkwe
A = {w,r,e}
answer: rerw, werew, e, er, wewe, rew, w, we
Can you help?

Mapping your input to the output you've provided is one way to do it.
Take the string characters one at a time and check each against the alphabet A.
Use a boolean hash table with 26 entries, one per letter.
Note: if capitals can appear too, hash them to their lowercase counterparts for case-insensitivity, or double the hash table size for case-sensitivity.
If a character is found in the table, move on and append it to the current substring.
If there is a miss, close off the current substring, save it, and start fresh at the next match.
Without the hash table it would take O(m*n) time, but now it takes O(m) for hashing plus O(n) for traversing, that is O(m+n) time.

Similar to what others have suggested, but in pseudocode form:
A = boolean array, all false
for each c in the alphabet
    set A[c] = true
L = stack of strings that will contain your solution
push an empty string onto stack L
for each character c of S
    if A[c] is true
        append c to the top string of stack L
    else
        push an empty string onto stack L
return the non-empty strings in L
Creating A will take O(n) (the size of the alphabet) and iterating through S will take O(m) (the length of S).
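For concreteness, here is a minimal C++ sketch of the lookup-table approach described in the two answers above; the function name and the 256-entry table are choices of this sketch, not part of either answer.

#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> maximalSubstrings(const std::string& s, const std::string& alphabet) {
    bool inAlphabet[256] = {false};
    for (unsigned char c : alphabet) inAlphabet[c] = true;    // O(|A|)

    std::vector<std::string> result;
    std::string current;
    for (unsigned char c : s) {                               // O(|S|)
        if (inAlphabet[c]) {
            current += c;                  // extend the current maximal substring
        } else if (!current.empty()) {
            result.push_back(current);     // a miss closes off the current substring
            current.clear();
        }
    }
    if (!current.empty()) result.push_back(current);
    return result;
}

int main() {
    // Example from the question; prints rerw, werew, e, er, wewe, rew, w, we (one per line).
    for (const auto& sub : maximalSubstrings("rerwmkwerewkekbvverqwewevbvrewqwmkwe", "wre"))
        std::cout << sub << "\n";
}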

Related

Count number of wonderful substrings

I found below problem in one website.
A wonderful string is a string where at most one letter appears an odd number of times.
For example, "ccjjc" and "abab" are wonderful, but "ab" is not.
Given a string word that consists of the first ten lowercase English letters ('a' through 'j'), return the number of wonderful non-empty substrings in word. If the same substring appears multiple times in word, then count each occurrence separately.
A substring is a contiguous sequence of characters in a string.
Example 1:
Input: word = "aba"
Output: 4
Explanation: The four wonderful substrings are "a", "b", "a" (the last character), and "aba".
I tried to solve it. I implemented an O(n^2) solution (n is the input string length), but the expected time complexity is O(n). I could not solve it in O(n). I found the solution below but could not understand it. Can you please help me to understand the O(n) solution below, or come up with an O(n) solution?
My O(N^2) approach - for every substring, check whether it has at most one odd-count char. This check can be done in O(1) time using a 10-character count array.
class Solution {
public:
    long long wonderfulSubstrings(string str) {
        long long ans = 0;
        int idx = 0;
        long long xorsum = 0;
        unordered_map<long long, long long> mp;
        mp[xorsum]++;
        while (idx < str.length()) {
            xorsum = xorsum ^ (1 << (str[idx] - 'a'));
            // If this xor value has appeared before, every earlier occurrence marks a
            // substring (ending here) in which all letters occur an even number of times.
            if (mp.find(xorsum) != mp.end())
                ans += mp[xorsum];
            mp[xorsum]++;
            // For substrings with exactly one odd-count letter, flip each of the ten
            // bits (a to j) in turn and look the result up in the map.
            for (int i = 0; i < 10; i++) {
                long long temp = xorsum;
                temp = temp ^ (1 << i);
                if (mp.find(temp) != mp.end())
                    ans += mp[temp];
            }
            idx++;
        }
        return ans;
    }
};
There are two main algorithmic tricks in the code, bitmasks and prefix-sums, which can be confusing if you've never seen them before. Let's look at how the problem is solved conceptually first.
For any substring of our string S, we want to count the number of appearances for each of the 10 possible letters, and ask if each number is even or odd.
For example, with a substring s = accjjc, we can summarize it as: odd# a, even# b, odd# c, even# d, even# e, even# f, even# g, even# h, even# i, even# j. This is kind of long, so we can summarize it using a bitmask: for each letter a-j, put a 1 if the count is odd, or 0 if the count is even. This gives us a 10-digit binary string, which is 1010000000 for our example.
You can treat this as a normal integer (or a long long, depending on how your ints are represented). When we see another character, its count flips between even and odd. On bitmasks, this is the same as flipping a single bit, i.e. an XOR operation. If we add another 'a', we can update the bitmask to start with 'even# a' by XORing it with the binary number 1000000000. (The code above orders the bits the other way round, with 'a' as the least significant bit, but the idea is the same.)
We want to count the number of substrings where at most one character count is odd. This is the same as counting the number of substrings whose bitmask has at most one 1. There are 11 of these bitmasks: the ten-zero string, and each string with exactly one 1 for each of the ten possible spots. If you interpret these as integers, the last ten strings are the first ten powers of 2: 1<<0, 1<<1, 1<<2, ... 1<<9.
Now, we want to count the bitmasks for all substrings in O(n) time. First, solve a simpler problem: count the bitmasks for just all prefixes, and store these counts in a hashmap. We can do this by keeping a running bitmask from the start, and performing updates by an XOR of the bit corresponding to that letter: xorsum=xorsum^(1<<(str[idx]-'a')). This can clearly be done in a single, O(n) time pass through the string.
How do we get counts of arbitrary substrings? The answer is prefix-sums: the count of letters in any substring can be expressed as a difference of two prefix-counts. For example, with s = accjjc, suppose we want the bitmask corresponding to the substring 'jj'. This substring can be written as the difference of two prefixes: 'jj' = 'accjj' - 'acc'.
In the same way, we want to subtract the counts for the two prefix strings. However, we only have the bitmasks telling us whether each letter has an even or odd frequency. In the arithmetic of bitmasks, we treat each position mod 2, so coordinate-wise subtraction becomes XOR.
This means counts(jj) = counts(accjj) - counts(acc) becomes
bitmask(jj) = bitmask(accjj) ^ bitmask(acc).
There's still a problem: the algorithm I've described is still quadratic. If, at every prefix, we iterate over all previous prefix-bitmasks and check if our mask XOR the old mask is one of the 11 goal-bitmasks, we still have a quadratic runtime. Instead, you can use the fact that XOR is its own inverse: if a ^ b = c, then b = a ^ c. So, instead of doing XORs with old prefix masks, you XOR with the 11 goal masks and add the number of times we've seen that mask: ans+=mp[xorsum] counts the substrings ending at our current index whose bitmask is xorsum ^ 0000000000 = xorsum. The loop after that counts substrings whose bitmask is one of the ten goal bitmasks.
Lastly, you just have to add your current prefix-mask to update the counts: mp[xorsum]++.
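To tie this back to the example in the question, a minimal driver might look like the following; it assumes the Solution class above is pasted into the same file, and only exists to sanity-check the expected output of 4 for "aba".

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

// ... paste the Solution class from above here ...

int main() {
    Solution sol;
    cout << sol.wonderfulSubstrings("aba") << endl;   // prints 4 for the question's example
}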

How to efficiently find identical substrings of a specified length in a collection of strings?

I have a collection S, typically containing 10-50 long strings. For illustrative purposes, suppose the length of each string ranges between 1000 and 10000 characters.
I would like to find strings of specified length k (typically in the range of 5 to 20) that are substrings of every string in S. This can obviously be done using a naive approach - enumerating every k-length substring in S[0] and checking if they exist in every other element of S.
Are there more efficient ways of approaching the problem? As far as I can tell, there are some similarities between this and the longest common subsequence problem, but my understanding of LCS is limited and I'm not sure how it could be adapted to the situation where we bound the desired common substring length to k, or if subsequence techniques can be applied to finding substrings.
Here's one fairly simple algorithm, which should be reasonably fast.
Using a rolling hash as in the Rabin-Karp string search algorithm, construct a hash table H0 of all the |S0|-k+1 length k substrings of S0. That's roughly O(|S0|) since each hash is computed in O(1) from the previous hash, but it will take longer if there are collisions or duplicate substrings. Using a better hash will help you with collisions but if there are a lot of k-length duplicate substrings in S0 then you could end up using O(k|S0|).
Now use the same rolling hash on S1. This time, look each substring up in H0 and if you find it, remove it from H0 and insert it into a new table H1. Again, this should be around O(|S1|) unless you have some pathological case, like both S0 and S1 being just long repetitions of the same character. (It's also going to be suboptimal if S0 and S1 are the same string, or have lots of overlapping pieces.)
Repeat step 2 for each Si, each time creating a new hash table. (At the end of each iteration of step 2, you can delete the hash table from the previous step.)
At the end, the last hash table will contain all the common k-length substrings.
The total run time should be about O(Σ|Si|) but in the worst case it could be O(kΣ|Si|). Even so, with the problem size as described, it should run in acceptable time.
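A rough C++ sketch of this iterated-intersection scheme is below. For simplicity it stores the k-length substrings themselves in unordered sets rather than using a rolling hash, so each insertion and lookup costs O(k) instead of O(1), but the structure (H0 intersected with each Si in turn) is the one described above; the names are my own.

#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

std::unordered_set<std::string> commonKSubstrings(const std::vector<std::string>& S, std::size_t k) {
    std::unordered_set<std::string> common;
    if (S.empty() || S[0].size() < k) return common;

    // H0: all k-length substrings of S[0] (duplicates collapse automatically).
    for (std::size_t i = 0; i + k <= S[0].size(); ++i)
        common.insert(S[0].substr(i, k));

    // For each further string, keep only the substrings seen in all previous ones.
    for (std::size_t s = 1; s < S.size(); ++s) {
        std::unordered_set<std::string> next;
        for (std::size_t i = 0; i + k <= S[s].size(); ++i) {
            std::string sub = S[s].substr(i, k);
            if (common.count(sub)) next.insert(sub);
        }
        common.swap(next);   // the previous table can now be discarded
    }
    return common;
}

int main() {
    for (const auto& sub : commonKSubstrings({"abcde", "xxbcdx", "bcdbcd"}, 3))
        std::cout << sub << "\n";   // prints bcd
}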
Some thoughts (N is number of strings, M is average length, K is needed substring size):
Approach 1:
Walk through all strings, computing rolling hash for k-length strings and storing these hashes in the map (store tuple {key: hash; string_num; position})
time O(NxM), space O(NxM)
Extract groups with equal hash, check step-by-step:
1) that size of group >= number of strings
2) all strings are represented in this group
3) thorough checking of real substrings for equality (sometimes hashes of distinct substrings might coincide)
Approach 2:
Build suffix array for every string
time O(N x MlogM) space O(N x M)
Find intersection of suffix arrays for the first string pair, using merge-like approach (suffixes are sorted), considering only part of suffixes of length k, then continue with the next string and so on
I would treat each long string as a collection of overlapped short strings, so ABCDEFGHI becomes ABCDE, BCDEF, CDEFG, DEFGH, EFGHI. You can represent each short string as a pair of indexes, one specifying the long string and one the starting offset in that string (if this strikes you as naive, skip to the end).
I would then sort each collection into ascending order.
Now you can find the short strings common to the first two collections by merging the sorted lists of indexes, keeping only those from the first collection which are also present in the second collection. Check the survivors of this against the third collection, and so on; the survivors at the end correspond to those short strings which are present in all long strings.
(Alternatively you could maintain a set of pointers into each sorted list and repeatedly look to see if every pointer points at short strings with the same text, then advancing the pointer which points at the smallest short string).
Time is O(n log n) for the initial sort, which dominates. In the worst case - e.g. when every string is AAAAAAAA..AA - there is a factor of k on top of this, because all string compares check all characters and take time k. Hopefully, there is a clever way round this with https://en.wikipedia.org/wiki/Suffix_array which allows you to sort in time O(n) rather than O(nk log n) and the https://en.wikipedia.org/wiki/LCP_array, which should allow you to skip some characters when comparing substrings from different suffix arrays.
Thinking about this again, I think the usual suffix array trick of concatenating all of the strings in question, separated by a character not found in any of them, works here. If you look at the LCP of the resulting suffix array you can split it into sections, splitting at points where the difference between suffixes occurs less than k characters in. Now each offset in any particular section starts with the same k characters. Now look at the offsets in each section and check to see if there is at least one offset from every possible starting string. If so, this k-character sequence occurs in all starting strings, but not otherwise. (There are suffix array constructions which work with arbitrarily large alphabets, so you can always expand your alphabet to produce a character not in any string, if necessary.)
I would try a simple method using HashSets:
Build a HashSet for each long string in S with all its k-strings.
Sort the sets by number of elements.
Scan the first set.
Look up each term in the other sets.
The first step takes care of repetitions in each long string.
The second ensures the minimum number of comparisons.
let getHashSet k (lstr:string) =
    let strs = System.Collections.Generic.HashSet<string>()
    for i in 0..lstr.Length - k do
        strs.Add lstr.[i..i + k - 1] |> ignore
    strs

let getCommons k lstrs =
    let strss = lstrs |> Seq.map (getHashSet k) |> Seq.sortBy (fun strs -> strs.Count)
    match strss |> Seq.tryHead with
    | None -> [||]
    | Some h ->
        let rest = Seq.tail strss |> Seq.toArray
        [| for s in h do
            if rest |> Array.forall (fun strs -> strs.Contains s) then yield s
        |]
Test:
let random = System.Random System.DateTime.Now.Millisecond
let generateString n =
    [| for i in 1..n do
        yield random.Next 20 |> (+) 65 |> System.Convert.ToByte
    |] |> System.Text.Encoding.ASCII.GetString
[ for i in 1..3 do yield generateString 10000 ]
|> getCommons 4
|> fun l -> printfn "found %d\n %A" l.Length l
result:
found 40
[|"PPTD"; "KLNN"; "FTSR"; "CNBM"; "SSHG"; "SHGO"; "LEHS"; "BBPD"; "LKQP"; "PFPH";
"AMMS"; "BEPC"; "HIPL"; "PGBJ"; "DDMJ"; "MQNO"; "SOBJ"; "GLAG"; "GBOC"; "NSDI";
"JDDL"; "OOJO"; "NETT"; "TAQN"; "DHME"; "AHDR"; "QHTS"; "TRQO"; "DHPM"; "HIMD";
"NHGH"; "EARK"; "ELNF"; "ADKE"; "DQCC"; "GKJA"; "ASME"; "KFGM"; "AMKE"; "JJLJ"|]
Here it is in fiddle: https://dotnetfiddle.net/ZK8DCT

Shortest uncommon prefix from a set of strings

Given a string A and a set of strings S, we need an optimal method to find a prefix of A which is not a prefix of any of the strings in S.
Example
A={apple}
S={april,apprehend,apprehension}
Output should be "appl" and not "app", since "app" is a prefix of "apprehend" and "apprehension", but "appl" is not a prefix of any string in S.
I know the trie approach: make a trie of the set S and then traverse it with the string A.
But what I want to ask is can we do it without trie?
Like, can we compare every pair (A, Si), where Si is the ith string from set S, and get the largest common prefix out of them? In this case that would be "app", so the required answer would be "appl".
This would take two loops (one for iterating through S and another for comparing Si and A).
Can we improve upon this??
Please suggest an optimum approach.
I'm not sure exactly what you had in mind, but here's one way to do it:
Keep a variable longest, initialised to 0.
Loop over all elements S[i] of S,
setting longest = max(longest, matchingPrefixLength(S[i], A)).
Return the prefix from A of length longest+1.
This uses O(1) space and takes O(length(S)*average length of S[i]) time.
This is optimal (at least for the worst case) since you can't get around needing to look at every character of every element in S.
Example:
A={apple}
S={april,apprehend,apprehension}
longest = 0
The longest prefix for S[0] and A is 2
So longest = max(0,2) = 2
The longest prefix for S[1] and A is 3
So longest = max(2,3) = 3
The longest prefix for S[2] and A is 3
So longest = max(3,3) = 3
Now we return the prefix of length longest+1 = 4, i.e. "appl"
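A short C++ sketch of this scan; the helper names are invented here, and it returns an empty string when no valid prefix exists (i.e. when A itself is a prefix of some string in S):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Length of the longest common prefix of a and b.
std::size_t matchingPrefixLength(const std::string& a, const std::string& b) {
    std::size_t i = 0;
    while (i < a.size() && i < b.size() && a[i] == b[i]) ++i;
    return i;
}

std::string shortestUncommonPrefix(const std::string& A, const std::vector<std::string>& S) {
    std::size_t longest = 0;
    for (const auto& s : S)
        longest = std::max(longest, matchingPrefixLength(s, A));
    if (longest + 1 > A.size()) return "";   // every prefix of A is a prefix of something in S
    return A.substr(0, longest + 1);
}

int main() {
    // Example from the question; prints "appl".
    std::cout << shortestUncommonPrefix("apple", {"april", "apprehend", "apprehension"}) << "\n";
}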
Note that there are actually 2 trie-based approaches:
Store only A in the trie. Iterate through the trie for each element from S to eliminate prefixes.
This uses much less memory than the second approach (but still more than the approach above). At least assuming A isn't much, much longer than S[i], but you can optimise to stop at the longest element in S or construct the tree as we go to avoid this case.
Store all elements from S in the trie. Iterate through the trie with A to find the shortest non-matching prefix.
This approach is significantly faster if you have lots of A's that you want to query against a constant set S (since you only have to set up the trie once and do a single lookup for each A, whereas for the first approach you have to create a new trie and run through each S[i] for every A).
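For reference, a rough C++ sketch of the second trie approach (store S in the trie, then walk A); it assumes lowercase a-z only, and the node layout and names are choices of this sketch:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct TrieNode {
    std::unique_ptr<TrieNode> child[26];   // children indexed by letter
};

void insert(TrieNode* root, const std::string& word) {
    TrieNode* node = root;
    for (char c : word) {
        int i = c - 'a';
        if (!node->child[i]) node->child[i] = std::make_unique<TrieNode>();
        node = node->child[i].get();
    }
}

// Walk A through the trie; the first character that cannot be followed ends the
// shortest prefix of A that is not a prefix of any inserted string.
std::string shortestNonPrefix(TrieNode* root, const std::string& A) {
    TrieNode* node = root;
    for (std::size_t i = 0; i < A.size(); ++i) {
        TrieNode* next = node->child[A[i] - 'a'].get();
        if (!next) return A.substr(0, i + 1);
        node = next;
    }
    return "";   // A itself is a prefix of some string in S
}

int main() {
    TrieNode root;
    std::vector<std::string> S = {"april", "apprehend", "apprehension"};
    for (const auto& s : S) insert(&root, s);
    std::cout << shortestNonPrefix(&root, "apple") << "\n";   // prints "appl"
}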
What is your input size?
Let's model your input as being of N+1 strings whose lengths are about M characters. Your total input size is about M(N+1) characters, plus some proportional amount of apparatus to encode that data in a usable format (data structure overhead).
Your algorithm ...
maxlen = 0
for i = 1 to N
    for j = 1 to M
        if A[j] != S[i][j] then
            break
        if j > maxlen then maxlen = j
print A[1...maxlen+1]
... performs up to M x N iterations of the innermost loop, reading two characters each time, for a total of up to 2MN characters read.
Recall our input data size was about M(N+1) also. So our question now is whether we can solve this problem, in the worst case, looking at asymptotically less than the total input (you do a little less than looking at all the input twice, or linear in the input size). The answer is no. Consider this worst case:
length of A is M'
length of all strings in S is M'
A differs from N-1 strings in S by the last two characters
A differs from 1 string in S by only the last character
Any algorithm must look at M'-1 characters of N-1 strings, plus M' characters of 1 string, to correctly determine the answer of this problem instance is A.
(M'-1)(N-1) + M' = M'N - M' - N + 1 + M' = M'N - N + 1
For N >= 2, the dominant term in both M'(N+1) and M'N - N + 1 is M'N, meaning that for N >= 2 both the input size and the amount of that input any correct algorithm must read are O(M'N). Your algorithm is O(MN). Any other algorithm cannot be asymptotically better.

Find the minimal lexicographical string formed by merging two strings

Suppose we are given two strings s1 and s2 (both lowercase). We have to find the minimal lexicographic string that can be formed by merging the two strings.
At first it looks pretty simple, like the merge step of the mergesort algorithm. But let us see what can go wrong.
s1: zyy
s2: zy
Now if we perform a merge on these two, we must decide which z to pick as they are equal. If we pick the z of s2 first, the string formed will be:
zyzyy
If we pick z of s1 first, the string formed will be:
zyyzy which is correct.
As we can see the merge of mergesort can lead to wrong answer.
Here's another example:
s1:zyy
s2:zyb
Now the correct answer will be zybzyy, which is obtained only if we pick the z of s2 first.
There are plenty of other cases in which the simple merge will fail. My question is: is there any standard algorithm for performing the merge so that it produces the correct output?
You could use dynamic programming. In f[x][y], store the minimal lexicographical string that can be formed after taking x characters from the first string s1 and y characters from the second string s2. You can calculate f in a bottom-up manner using the update:
f[x][y] = min(f[x-1][y] + s1[x], f[x][y-1] + s2[y])   // the '+' here represents the
                                                      // concatenation of a string and a character
You start with f[0][0] = "" (empty string).
For efficiency you can store the strings in f as references. That is, you can store in f the objects
class StringRef {
    StringRef prev;
    char c;
}
To extract the string at a certain f[x][y] you just follow the references. To update, you point back to either f[x-1][y] or f[x][y-1], depending on what your update step says.
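A direct, unoptimised C++ sketch of this DP, storing whole strings in f rather than StringRef chains (so it uses roughly O(|s1|·|s2|·(|s1|+|s2|)) time and memory); the examples from the question are used as a check:

#include <iostream>
#include <string>
#include <vector>

// f[x][y] is the lexicographically smallest merge of the first x characters of s1
// and the first y characters of s2.
std::string minimalMerge(const std::string& s1, const std::string& s2) {
    std::size_t n = s1.size(), m = s2.size();
    std::vector<std::vector<std::string>> f(n + 1, std::vector<std::string>(m + 1));
    for (std::size_t x = 0; x <= n; ++x) {
        for (std::size_t y = 0; y <= m; ++y) {
            if (x == 0 && y == 0) continue;          // f[0][0] = "" (empty string)
            std::string best;
            bool haveBest = false;
            if (x > 0) { best = f[x - 1][y] + s1[x - 1]; haveBest = true; }
            if (y > 0) {
                std::string candidate = f[x][y - 1] + s2[y - 1];
                if (!haveBest || candidate < best) best = candidate;
            }
            f[x][y] = best;
        }
    }
    return f[n][m];
}

int main() {
    std::cout << minimalMerge("zyy", "zy") << "\n";   // prints zyyzy
    std::cout << minimalMerge("zyy", "zyb") << "\n";  // prints zybzyy
}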
It seems that the solution can be almost the same as you described (the "mergesort"-like approach), except with special handling of equality. As long as the first characters of both strings are equal, you look ahead at the second character, the 3rd, etc. If the end of one string is reached, consider the first character of the other string as the next character of the string whose end was reached, and so on for the 2nd character, etc. If the ends of both strings are reached, then it doesn't matter from which string you take the first character. Note that this algorithm is O(N) because after a look-ahead on equal prefixes you know the whole look-ahead sequence (i.e. string prefix) to include, not just the first character.
EDIT: you look ahead so long as the current i-th characters from both strings are equal and alphabetically not larger than the first character in the current prefix.

algorithms for fast string approximate matching

Given a source string s and n equal-length strings, I need to find a quick algorithm to return those strings that have at most k characters different from the source string s at the corresponding positions.
What is a fast algorithm to do so?
PS: I should say that this is an academic question; I want to find the most efficient algorithm if possible.
Also, I missed one very important piece of information: the n equal-length strings form a dictionary, against which many source strings s will be queried. There seems to be room for some sort of preprocessing step to make the queries more efficient.
My gut instinct is just to iterate over each of the n strings, maintaining a counter of how many characters differ from s, but I'm not claiming it is the most efficient solution. However it would be O(n), so unless this is a known performance problem or an academic question, I'd go with that.
Sedgewick in his book "Algorithms" writes that Ternary Search Tree allows "to locate all words within a given Hamming distance of a query word". Article in Dr. Dobb's
Given that the strings are fixed length, you can compute the Hamming distance between two strings to determine the similarity; this is O(n) on the length of the string. So, worst case is that your algorithm is O(nm) for comparing your string against m words.
As an alternative, a fast solution that's also a memory hog is to preprocess your dictionary into a map; keys are tuples (p, c) where p is a position in the string and c is the character at that position, values are the strings that have character c at position p (so "the" will be in the map at {(0, 't'), "the"}, {(1, 'h'), "the"}, {(2, 'e'), "the"}). To query the map, iterate through the query string's characters and construct a result map from the retrieved strings; keys are strings, values are the number of times each string has been retrieved from the primary map (so with the query string "the", the key "thx" will have a value of 2, and the key "tee" will also have a value of 2). Finally, iterate through the result map and discard strings whose values are less than K, the required number of matching positions.
You can save memory by discarding keys that can't possibly reach K by the time the result map is completed. For example, if K is 5 and N is 8, then from the 5th character of the query string onwards you can discard any newly retrieved strings that aren't already in the result map, since they can't possibly reach 5 matching characters. Or, when you've finished with the 6th character of the query string, you can iterate through the result map and remove all keys whose values are less than 3.
If need be you can offload the primary precomputed map to a NoSql key-value database or something along those lines in order to save on main memory (and also so that you don't have to precompute the dictionary every time the program restarts).
Rather than storing a tuple (p, c) as the key in the primary map, you can instead concatenate the position and character into a string (so (5, 't') becomes "5t", and (12, 'x') becomes "12x").
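A rough C++ sketch of this precomputed-map idea; the names are invented here, keys use the "position + character" string encoding from the last paragraph, and requiredMatches plays the role of K above (string length minus the allowed number of differences):

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Key: position and character concatenated into a string (e.g. "5t").
// Value: the dictionary words having that character at that position.
using PositionalMap = std::unordered_map<std::string, std::vector<std::string>>;

PositionalMap buildMap(const std::vector<std::string>& dictionary) {
    PositionalMap map;
    for (const auto& word : dictionary)
        for (std::size_t p = 0; p < word.size(); ++p)
            map[std::to_string(p) + word[p]].push_back(word);
    return map;
}

// Words matching the query in at least requiredMatches positions
// (i.e. differing from it in at most length - requiredMatches positions).
std::vector<std::string> query(const PositionalMap& map, const std::string& q,
                               std::size_t requiredMatches) {
    std::unordered_map<std::string, std::size_t> counts;
    for (std::size_t p = 0; p < q.size(); ++p) {
        auto it = map.find(std::to_string(p) + q[p]);
        if (it == map.end()) continue;
        for (const auto& word : it->second) ++counts[word];
    }
    std::vector<std::string> result;
    for (const auto& [word, c] : counts)
        if (c >= requiredMatches) result.push_back(word);
    return result;
}

int main() {
    // Length-3 words, at most 1 difference allowed, so 2 matches required.
    PositionalMap map = buildMap({"the", "thx", "tee", "abc"});
    for (const auto& w : query(map, "the", 2))
        std::cout << w << "\n";   // prints the, thx, tee (in some order)
}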
Without knowing where in each input string the matching characters will be, for a particular string you might need to check every character no matter what order you check them in. Therefore it makes sense to just iterate over each string character by character and keep a running count of mismatches. If i is the number of mismatches so far, return false as soon as i exceeds k, and return true once the number of unchecked characters remaining is at most k - i.
Note that depending on how long the strings are and how many mismatches you'll allow, it might be faster to iterate over the whole string rather than performing these checks, or perhaps to perform them only after every couple characters. Play around with it to see how you get the fastest performance.
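A small C++ sketch of that early-exit check, assuming the two strings have equal length as in the question:

#include <iostream>
#include <string>

// True iff s and t (equal length) differ in at most k positions, stopping as soon
// as the answer is decided either way.
bool withinKDifferences(const std::string& s, const std::string& t, std::size_t k) {
    std::size_t mismatches = 0;
    for (std::size_t p = 0; p < s.size(); ++p) {
        if (s[p] != t[p] && ++mismatches > k) return false;     // too many differences already
        if (s.size() - p - 1 <= k - mismatches) return true;    // remaining chars can't push us over k
    }
    return true;
}

int main() {
    std::cout << std::boolalpha
              << withinKDifferences("karolin", "kathrin", 3) << "\n"   // 3 differences -> true
              << withinKDifferences("karolin", "kathrin", 2) << "\n";  // 3 differences -> false
}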
My method, if we're thinking out loud :P I can't see a way to do this without going through each of the n strings, but I'm happy to be corrected. It would begin with a preprocessing step that saves a second copy of each of the n strings with its characters sorted into ascending order.
The first part of the comparison would then be to check each of the n strings one character at a time, say n', against each character in s, say s'.
If s' is less than n', there is no match, so move to the next s'. If n' is less than s', go to the next n'. Otherwise record a matching character. Repeat this until k mismatches are found, or enough matches are found, and mark that string accordingly.
For further consideration, an added preprocessing step could compute, for each adjacent pair of strings in n, the total number of characters in which they differ. This could then be used when comparing the strings in n to s: if a sufficient difference already exists between one string and s, and the adjacent string in n is close to it, there may be no need to compare the adjacent one?
