find frequency of every word - string

I was asked this question in an interview, but I was not able to answer it.
The question is:
You are given a directed graph in which every node is a character, and you are also given an array of strings.
The task is to calculate the frequency of every string in the array by searching the graph.
My approach: I tried a trie and a suffix tree, but the interviewer was not fully satisfied. Can you give me an algorithm for this problem?

How about the following, to find the number of occurrences of a string s in a directed graph:
Start with a breadth-first search (marking already visited nodes to avoid cycles).
When the first character is found, switch to a depth-first search with max-depth = length(s).
If the full string is detected, increment the occurrence count once for each matching path found by the DFS.
Resume the BFS.
Some caveats:
I do not believe the DFS should share the BFS's visited-node list (you may need to go back to the beginning and overlap, for example).
The BFS should also not share the DFS's visited list. For example, you could be looking for "Alan", encounter "AAlan", and need to re-start on the second A.
Now for an array, I would just repeat this procedure for each string. Sure, there may be a more efficient solution, but I'd start off thinking about it this way.
Did your answer include any conversation about breadth-first or depth-first search? If someone mentioned searching a graph, I'd almost always reply with a variation of one of these.
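For concreteness, here is a minimal sketch of that idea in Python. The graph representation is an assumption (adj maps each node to its successors, char_of maps each node to its character), and the BFS is seeded with every node so that disconnected regions are covered:

from collections import deque

def count_occurrences(adj, char_of, s):
    # adj: dict node -> list of successor nodes (assumed representation)
    # char_of: dict node -> character at that node
    def dfs(node, depth):
        # Bounded DFS: count the walks of length len(s) spelling s from here.
        # No visited set on purpose, so overlapping matches are still counted;
        # the depth bound guarantees termination even with cycles.
        if char_of[node] != s[depth]:
            return 0
        if depth == len(s) - 1:
            return 1
        return sum(dfs(nxt, depth + 1) for nxt in adj[node])

    count = 0
    visited = set()
    queue = deque(adj)  # seed the BFS with every node
    while queue:
        node = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        if char_of[node] == s[0]:
            count += dfs(node, 0)  # switch to the bounded DFS
        queue.extend(adj[node])    # resume the BFS
    return count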

Here's another solution:
First we need to do some preprocessing on the string array.
Let's define C as the set of all the characters appearing in the strings of the array.
For each character in C, we are going to keep track of each string containing that character, its position in that string, and a Boolean value stating whether it is the last character in that string. This can be done using a dictionary.
For example, let's say our array is ['one', 'two', 'three']. Our dictionary would look something like this:
'o': (0, 0, false),(1,2,true)
't': (1, 0, false),(2,0,false)
'n': (0, 1, false)
'e': (0, 2, true), (2, 3, false), (2, 4, true)
'h': (2, 1, false)
'r': (2, 2, false)
'w': (1, 1, false)
Next we are going to use DFS and dynamic programming.
Basically, whenever you visit an edge, you check the parent and the child in the dict to see if they compose a substring, and you store that information.
Using this method, you can detect every occurrence of every string in the array.
Building the preprocessing table can be done in O(L), where L is the sum of the lengths of all the strings in the array.
Discovering all occurrences can be done in O(m * k), where m is the number of edges (and not the number of nodes, as a node can be discovered multiple times) and k is the number of strings.
The implementation can be a little tricky and there are some pitfalls you should avoid.
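As a rough sketch (my own rendering, not the answerer's code), the preprocessing table for the example above could be built like this:

from collections import defaultdict

def build_table(strings):
    # char -> list of (string index, position, is-last-character flag)
    table = defaultdict(list)
    for i, s in enumerate(strings):
        for pos, ch in enumerate(s):
            table[ch].append((i, pos, pos == len(s) - 1))
    return table

table = build_table(['one', 'two', 'three'])
# table['e'] == [(0, 2, True), (2, 3, False), (2, 4, True)]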

Consider a graph in which each level has all 4*4 edges to the next level (hard to draw, so please bear with me);
there may be a great many occurrences.
I think the interviewer may have been expecting dynamic programming:
process each string individually; f[i][j] denotes the number of walks that spell the string's last j letters starting from node i, and the rest is easy.
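A sketch of that DP, assuming the same hypothetical adjacency-list representation as in the earlier sketch (adj maps each node to its successors, char_of to its character). Because j strictly decreases in the recurrence, cycles in the graph are harmless:

def count_occurrences_dp(adj, char_of, s):
    # f[i] holds the current layer f[i][j]: the number of walks starting at
    # node i that spell the last j letters of s (node i must match s[n - j]).
    n = len(s)
    f = {i: 1 if char_of[i] == s[-1] else 0 for i in adj}  # j = 1
    for j in range(2, n + 1):
        f = {i: sum(f[u] for u in adj[i]) if char_of[i] == s[n - j] else 0
             for i in adj}
    return sum(f.values())  # total occurrences of s, summed over start nodes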


How to access the count value of a Counter object in Python3?

Scenario
In a few lines of code, I have included the line
counts = Counter(rank for rank in ranks)
because I want to find the highest count of a character in a string.
So I end up with the following object:
Counter({'A': 4, 'K': 1})
Here, the value I'm looking for is 4, because it is the highest count. Assuming the object is called counts, max(counts) returns 'K', presumably because 'K' > 'A' in Unicode.
Question
How can I access the largest count/value, rather than the "largest" key?
You can use max as suggested by others. Note, though, that the Counter class provides the most_common(k) method which is slightly more flexible:
counts.most_common(1)[0][1]
Its real performance benefits, however, will only be seen if you want more than 1 most common element.
Maybe
max(counts.values())
would work?
From the Python documentation:
A Counter is a dict subclass for counting hashable objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values.
So you should treat the counter as a dictionary. To take the biggest value, use max() on the counter's .values().
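Putting the suggestions together in the interpreter (using a literal string in place of the asker's ranks):

from collections import Counter

counts = Counter('AAAAK')           # Counter({'A': 4, 'K': 1})
print(max(counts))                  # 'K' -- max over the keys
print(max(counts.values()))         # 4
print(counts.most_common(1)[0][1])  # 4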

how would i look for the shortest unique subsequence from a set of words in python?

If I have a set of similar words such as:
\bigoplus
\bigotimes
\bigskip
\bigsqcup
\biguplus
\bigvee
\bigwedge
...
\zebra
\zeta
I would like to find the shortest set of letters that characterizes each word uniquely,
i.e.
\bigop:
\bigoplus
\bigot:
\bigotimes
\bigsk:
\bigskip
EDIT: notice that the unique sequence identifier always starts from the beginning of the word. I am writing an app that gives snippet suggestions when typing, so in general users will start typing from the start of the word.
And so on; the sequence only needs to be as long as is necessary to characterize a word uniquely
(EDIT: but it needs to start from the beginning of the word).
The characterization always begins from the beginning of the word.
My thoughts:
I was thinking of sorting the words and grouping them based on the first letter, then probably using a longest-common-subsequence algorithm to find the longest subsequence in common, taking its length and using length+1 characters for that unique substring. But I'm stuck, since the algorithms I know for longest common subsequence usually only take two parameters at a time, and I may have more than two words in each group starting with a particular letter.
Am I solving an already-solved problem? Google was no help.
I'm assuming you want to find the prefixes that uniquely identify the strings, because if you could pick any subsequence, then for example om would be enough to identify \bigotimes in your example.
You can make use of the fact that for a given word, the word with the longest common prefix will be adjacent to it in lexicographical order.
Since your dictionary seems to be sorted already, you can figure out the solution for every word by finding the longest prefix that disambiguates it from both its neighbors.
Example:
>>> lst = r"""
... \bigoplus
... \bigotimes
... \bigskip
... \bigsqcup
... \biguplus
... \bigvee
... \bigwedge
... """.split()
>>> lst.sort() # necessary if lst is not already sorted
>>> lst = [""] + lst + [""]
>>> import os
>>> def cp(x): return len(os.path.commonprefix(x))
...
>>> { lst[i]: 1 + max(cp(lst[i-1:i+1]), cp(lst[i:i+2])) for i in range(1,len(lst)-1) }
{'\\bigvee': 5,
'\\bigsqcup': 6,
'\\biguplus': 5,
'\\bigwedge': 5,
'\\bigotimes': 6,
'\\bigoplus': 6,
'\\bigskip': 6}
The numbers indicate how long the minimal uniquely identifying prefix of a word is.
Thought I'd dump this here since it was the most similar to a question I was about to ask:
Looking for a better solution (will report back when I find one) to iterating through a sequence of strings, trying to map the shortest unique string for/to each.
For example, in a sequence of:
['blue', 'black', 'bold']
# 'blu' --> 'blue'
# 'bla' --> 'black'
# 'bo' --> 'bold'
Looking to improve upon my first, feeble solution. Here's what I came up with:
# Note: Iterating through the keys in a dict, mapping shortest
# unique string to the original string.
shortest_unique_strings = {}
for k in mydict:
    for ix in range(len(k)):
        # When the list comprehension yields only one item,
        # 'k[:ix+1]' (the current prefix) is unique.
        if len([key for key in mydict if key.startswith(k[:ix+1])]) == 1:
            shortest_unique_strings[k[:ix+1]] = k
            break
Note: On improving efficiency: we should be able to remove those keys/strings that have already been found, so that successive searches don't have to repeat on those items.
Note: I specifically refrained from creating/using any functions outside of built-ins.
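For what it's worth, here is a sketch of mine (also built-ins only) that avoids rescanning the whole set for every key: count how many words share each prefix, in effect a flattened trie, then take each word's first prefix whose count is 1:

def shortest_unique_prefixes(words):
    # Count how many words share each prefix (a flattened trie).
    counts = {}
    for w in words:
        for i in range(1, len(w) + 1):
            counts[w[:i]] = counts.get(w[:i], 0) + 1
    result = {}
    for w in words:
        for i in range(1, len(w) + 1):
            if counts[w[:i]] == 1:
                result[w[:i]] = w
                break
        else:
            result[w] = w  # w is a prefix of another word
    return result

print(shortest_unique_prefixes(['blue', 'black', 'bold']))
# {'blu': 'blue', 'bla': 'black', 'bo': 'bold'}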

Finding length of substring

I am given n strings. I have to find a string S such that each of the given n strings is a subsequence of S.
For example, I have given the following 5 strings:
AATT
CGTT
CAGT
ACGT
ATGC
Then the answer is "ACAGTGCT", because ACAGTGCT contains all the given strings as subsequences.
To solve this problem I need to know the algorithm, but I have no idea how to solve it. Can you help me by describing a technique for solving this problem?
This is an NP-complete problem: the shortest common supersequence problem, closely related to multiple sequence alignment.
The wiki page for multiple sequence alignment describes solution methods such as dynamic programming, which works for small n but becomes prohibitively expensive for larger n.
The basic idea is to construct an array f[a,b,c,...] representing the length of the shortest string S that generates the first "a" characters of the first string, the first "b" characters of the second, the first "c" characters of the third, and so on.
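The full table over n strings is exponential in n; for two strings it collapses to the classic recurrence. A sketch of the length computation (the function name is mine):

def scs_length(a, b):
    # f[i][j]: length of the shortest common supersequence
    # of the first i characters of a and the first j of b.
    m, n = len(a), len(b)
    f = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 or j == 0:
                f[i][j] = i + j
            elif a[i - 1] == b[j - 1]:
                f[i][j] = f[i - 1][j - 1] + 1
            else:
                f[i][j] = 1 + min(f[i - 1][j], f[i][j - 1])
    return f[m][n]

print(scs_length('AATT', 'CGTT'))  # 6 (e.g. 'CGAATT')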
My approach: using a trie.
Build a trie from the given words, then:
create empty string (S)
create empty string (prev)
for each layer in the trie
    create empty string (curr)
    for each character used in the current layer
        if the character is not used in the previous layer (not in prev)
            add the character to S
            add the character to curr
    prev = curr
Hope this helps :)
1 Definitions
A sequence of length n is a concatenation of n symbols taken from an alphabet Σ.
If S is a sequence of length n and T is a sequence of length m, with n <= m, then S is a subsequence of T if S can be obtained by deleting m-n symbols from T. The symbols need not be contiguous.
A sequence T of length m is a supersequence of S of length n if T can be obtained from S by inserting m-n symbols. That is, T is a supersequence of S if and only if S is a subsequence of T.
A sequence T is a common supersequence of the sequences S1 and S2 if T is a supersequence of both S1 and S2.
2 The problem
The problem is to find a shortest common supersequence (SCS), which is a common supersequence of minimal length. There could be more than one SCS for a given problem.
2.1 Example
Σ = {a, b, c}
S1 = bcb
S2 = baab
S3 = babc
One shortest common supersequence is babcab (babacb, baabcb, bcaabc, bacabc, baacbc).
3 Techniques
Dynamic programming: requires too much memory unless the number of input sequences is very small.
Branch and bound: requires too much time unless the alphabet is very small.
Majority merge: the best known heuristic when the number of sequences is large compared to the alphabet size. [1]
Greedy (take two sequences and replace them by their optimal shortest common supersequence until a single string is left): worse than majority merge. [1]
Genetic algorithms: there are indications that they might do better than majority merge. [1]
4 Implemented heuristics
4.1 The trivial solution
The trivial solution is at most |Σ| times the optimal solution length and is obtained by repeating the concatenation of all characters in Σ as many times as the length of the longest sequence. That is, if Σ = {a, b, c} and the longest input sequence has length 4, we get abcabcabcabc.
4.2 Majority merge heuristic
The Majority merge heuristic builds up a supersequence from the empty sequence (S) in the following way:
WHILE there are non-empty input sequences
    s <- the most frequent symbol at the start of the non-empty input sequences
    Add s to the end of S.
    Remove s from the beginning of each input sequence that starts with s.
END WHILE
Majority merge performs very well when the number of sequences is large compared to the alphabet size.
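A direct Python rendering of the loop above (a sketch; ties between equally frequent symbols are broken arbitrarily):

from collections import Counter

def majority_merge(seqs):
    seqs = [list(s) for s in seqs]
    out = []
    while any(seqs):
        # Most frequent symbol at the start of the non-empty sequences.
        s, _ = Counter(seq[0] for seq in seqs if seq).most_common(1)[0]
        out.append(s)
        for seq in seqs:
            if seq and seq[0] == s:
                del seq[0]  # consume the leading symbol
    return ''.join(out)

print(majority_merge(['bcb', 'baab', 'babc']))
# prints one valid common supersequence, e.g. 'bacbabc'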
5 My approach - Local search
My approach was to apply a local search heuristic to the SCS problem and compare it to the Majority merge heuristic to see if it might do better in the case when the alphabet size is larger than the number of sequences.
Since the length of a valid supersequence may vary, and any change to the supersequence may give an invalid string, a direct representation of a supersequence as a feasible solution is not an option.
I chose to view a feasible solution (S) as a sequence of mappings x1...xSl, where Sl is the sum of the lengths of all sequences and each xi is a mapping to a sequence number and an index.
That means, if L = {{s1,1...s1,m1}, {s2,1...s2,m2}, ..., {sn,1...sn,mn}} is the set of input sequences and L(i) is the ith sequence, the mappings are represented like this:
xi -> {k, l}, where k ∈ L and l ∈ L(k)
To be sure that any solution is valid we need to introduce the following constraints:
Every symbol in every sequence may only have one xi mapped to it.
If xi maps to s(s,k) and xj maps to s(s,l) and k < l, then i < j.
If xi maps to s(s,k) and xj maps to s(s,l) and k > l, then i > j.
The last two constraints enforce that the order of each sequence is preserved, but not its position in S. If we have two mappings xi and xj, we may only exchange them if they map to different sequences.
5.1 The initial solution
There are many ways to choose an initial solution. As long as the order of the sequences is preserved, it is valid. I chose not to randomize a solution in some way, but to try two very different solution types and compare them.
The first one is to create an initial solution by simply concatenating all the sequences.
The second one is to interleave the sequences one symbol at a time. That is to start with the first symbol of every sequence then, in the same order, take the second symbol of every sequence and so on.
5.2 Local change and the neighbourhood
A local change is done by exchanging two mappings in the solution.
One way of doing the iteration is to go from 1 to Sl and do the best exchange for each mapping.
Another way is to try to exchange the mappings in the order they are defined by the sequences. That is, first exchange s1,1, then s2,1. That is what we do.
There are two variants I have tried.
In the first one, if a single mapping exchange does not yield a better value I return; otherwise I go on.
In the second one, for each sequence separately I do as many exchanges as there are sequences, so a symbol in each sequence has a chance to move. I keep the exchange that gives the best value, and if that value is worse than the value of the last step of the algorithm I return; otherwise I go on.
A symbol may move any number of position to the left or to the right as long as the exchange does not change the order of the original sequences.
The neighbourhood in the first variant is the number of valid exchanges that can be made for the symbol. In the second variant it is the sum of valid exchanges of each symbol after the previous symbol has been exchanged.
5.3 Evaluation
Since the length of the solution is always constant it has to be compressed before the real length of the solution may be obtained.
The solution S, which consists of mappings, is converted to a string by using the symbols each mapping points to. A new, initially empty, solution T is created. Then this algorithm is performed:
T = {}
FOR i = 0 TO Sl
    found = FALSE
    FOR j = 0 TO |L|
        IF first symbol in L(j) = the symbol xi maps to THEN
            Remove first symbol from L(j)
            found = TRUE
        END IF
    END FOR
    IF found = TRUE THEN
        Add the symbol xi maps to to the end of T
    END IF
END FOR
Sl is as before the sum of the lengths of all sequences. L is the set of all sequences and L(j) is sequence number j.
The value of the solution S is obtained as |T|.
With many, many thanks to Andreas Westling.

algorithms for fast string approximate matching

Given a source string s and n equal length strings, I need to find a quick algorithm to return those strings that have at most k characters that are different from the source string s at each corresponding position.
What is a fast algorithm to do so?
PS: I should point out that this is an academic question. I want to find the most efficient algorithm if possible.
Also, I missed one very important piece of information: the n equal-length strings form a dictionary, against which many source strings s will be queried. There seems to be room for some sort of preprocessing step to make queries more efficient.
My gut instinct is just to iterate over each of the n strings, maintaining a counter of how many characters differ from s, but I'm not claiming that is the most efficient solution. However, it would be O(n), so unless this is a known performance problem, or an academic question, I'd go with that.
Sedgewick in his book "Algorithms" writes that Ternary Search Tree allows "to locate all words within a given Hamming distance of a query word". Article in Dr. Dobb's
Given that the strings are fixed length, you can compute the Hamming distance between two strings to determine the similarity; this is O(n) on the length of the string. So, worst case is that your algorithm is O(nm) for comparing your string against m words.
As an alternative, a fast solution that's also a memory hog is to preprocess your dictionary into a map; keys are tuples (p, c), where p is a position in the string and c is the character at that position, and values are the strings that have character c at position p (so "the" will be in the map at {(0, 't'), "the"}, {(1, 'h'), "the"}, {(2, 'e'), "the"}). To query the map, iterate through the query string's characters and construct a result map from the retrieved strings; keys are strings, values are the number of times each string has been retrieved from the primary map (so with the query string "the", the key "thx" will have a value of 2, and the key "tee" will also have a value of 2, since it matches at positions 0 and 2). Finally, iterate through the result map and discard strings whose values are less than K.
You can save memory by discarding keys that can't possibly reach K as the result map is being completed. For example, if K is 5 and N is 8, then from the 5th character of the query string onward you can discard any retrieved strings that aren't already in the result map, since they can't possibly reach 5 matching characters. Or, when you've finished with the 6th character of the query string, you can iterate through the result map and remove all keys whose values are less than 3.
If need be you can offload the primary precomputed map to a NoSql key-value database or something along those lines in order to save on main memory (and also so that you don't have to precompute the dictionary every time the program restarts).
Rather than storing a tuple (p, c) as the key in the primary map, you can instead concatenate the position and character into a string (so (5, 't') becomes "5t", and (12, 'x') becomes "12x").
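A sketch of this precomputed-map idea (variable names are mine; K, the required number of matching positions, equals len(s) - k for the question's mismatch budget k):

from collections import defaultdict

def build_index(words):
    # (position, character) -> list of words with that character there
    index = defaultdict(list)
    for w in words:
        for p, c in enumerate(w):
            index[(p, c)].append(w)
    return index

def query(index, s, k):
    # Count matching positions per candidate; keep those within k mismatches.
    hits = defaultdict(int)
    for p, c in enumerate(s):
        for w in index[(p, c)]:
            hits[w] += 1
    return [w for w, h in hits.items() if h >= len(s) - k]

index = build_index(['the', 'thx', 'tee'])
print(query(index, 'the', 1))  # ['the', 'thx', 'tee']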
Without knowing where in each input string the matching characters will be, for a particular string you might need to check every character no matter what order you check them in. Therefore it makes sense to just iterate over each string character by character and keep a running total of mismatches. If i is the number of mismatches so far, return false as soon as i exceeds k, and you can return true early once fewer than k-i unchecked characters remain in the string.
Note that depending on how long the strings are and how many mismatches you allow, it might be faster to iterate over the whole string rather than performing these checks at every character, or perhaps to perform them only every couple of characters. Play around with it to see what gives the fastest performance.
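A sketch of such an early-exit check:

def within_k(s, t, k):
    # True if equal-length strings s and t differ in at most k positions.
    mismatches = 0
    for a, b in zip(s, t):
        if a != b:
            mismatches += 1
            if mismatches > k:
                return False  # bail out once the budget is exceeded
    return True

print(within_k('karolin', 'kathrin', 3))  # True: exactly 3 mismatches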
My method, if we're thinking out loud :P I can't see a way to do this without going through each of the n strings, but I'm happy to be corrected. On that basis, it would begin with a pre-processing pass that saves a second copy of each of the n strings with its characters sorted in ascending order.
The first part of the comparison would then be to check each string a character at a time, say n', against each character in s, say s'.
If s' is less than n', they are not equal, so move to the next s'. If n' is less than s', go to the next n'. Otherwise record a matching character. Repeat this until k mismatches are found or the required matches are found, and mark the string accordingly.
For further consideration, additional pre-processing could be done on each adjacent string in n to record the total number of characters by which they differ. This could then be used when comparing a string in n to s: if a sufficient difference exists between it and an adjacent string, there may be no need to compare it?

complexity of constructing an inverted index list

Given n strings S1, S2, ..., Sn, and an alphabet set A = {a_1, a_2, ..., a_m}. Assume that the characters in each string are all distinct. Now I want to create an inverted index for each a_i (i = 1, 2, ..., m). My inverted index also has something special: the characters in A are in some sequential order, and if the inverted index of a_i already includes a string (say S_2), then a_j (j = i+1, i+2, ..., m) need not include S_2 any more. In short, every string appears in the inverted list exactly once. My question is: how can such a list be built in a fast and efficient way? Is there a bound on the time complexity?
For example, A={a,b,e,g}, S1={abg}, S2={bg}, S3={gae}, S4={g}. Then my inverted-list should be:
a: S1,S3
b: S2 (S1 has appeared previously, so we don't need to include it here)
e:
g: S4
If I understand your question correctly, a straightforward solution is:
for each string in the n strings
    find the "smallest" character in the string
    put the string in the list for that character
The complexity is proportional to the total length of the strings, multiplied by a constant factor for the order testing.
If there is a simple way of testing the order (e.g. the characters are in alphabetical order and all lower-case, so < is enough), simply compare them; otherwise, I suggest using a hash table mapping each character to its order, and then comparing the orders.
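A sketch of this in Python, using a hash table of character orders as suggested (names are mine):

def build_inverted_list(alphabet, named_strings):
    rank = {c: i for i, c in enumerate(alphabet)}  # hash table of orders
    inverted = {c: [] for c in alphabet}
    for name, s in named_strings:
        smallest = min(s, key=rank.__getitem__)    # "smallest" character
        inverted[smallest].append(name)
    return inverted

print(build_inverted_list('abeg',
                          [('S1', 'abg'), ('S2', 'bg'),
                           ('S3', 'gae'), ('S4', 'g')]))
# {'a': ['S1', 'S3'], 'b': ['S2'], 'e': [], 'g': ['S4']}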
