How would I look for the shortest unique subsequence from a set of words in Python? - string

If I have a set of similar words such as:
\bigoplus
\bigotimes
\bigskip
\bigsqcup
\biguplus
\bigvee
\bigwedge
...
\zebra
\zeta
I would like to find the shortest unique set of letters that characterizes each word uniquely,
i.e.
\bigop:
\bigoplus
\bigot:
\bigotimes
\bigsk:
\bigskip
and so on; the sequence needs only be as long as is enough to characterize a word uniquely, and the characterization always begins from the beginning of the word.
EDIT: Notice the unique sequence identifier always starts from the beginning of the word. I'm writing an app that gives snippet suggestions when typing, so in general users will start typing from the start of the word.
My thoughts:
I was thinking of sorting the words and grouping them based on the first letter, then probably using a longest-common-subsequence algorithm to find the longest subsequence in common, taking its length and using length+1 characters for the unique substring. But I'm stuck, since the algorithms I know for longest common subsequence usually take only two parameters at a time, and I may have more than two words in each group starting with a particular letter.
Am I solving an already-solved problem? Google was no help.

I'm assuming you want to find the prefixes that uniquely identify the strings, because if you could pick any subsequence, then for example om would be enough to identify \bigotimes in your example.
You can make use of the fact that for a given word, the word with the longest common prefix will be adjacent to it in lexicographical order.
Since your dictionary seems to be sorted already, you can figure out the solution for every word by finding the longest prefix that disambiguates it from both its neighbors.
Example:
>>> lst = r"""
... \bigoplus
... \bigotimes
... \bigskip
... \bigsqcup
... \biguplus
... \bigvee
... \bigwedge
... """.split()
>>> lst.sort() # necessary if lst is not already sorted
>>> lst = [""] + lst + [""]
>>> import os
>>> def cp(x): return len(os.path.commonprefix(x))
...
>>> { lst[i]: 1 + max(cp(lst[i-1:i+1]), cp(lst[i:i+2])) for i in range(1,len(lst)-1) }
{'\\bigvee': 5,
'\\bigsqcup': 6,
'\\biguplus': 5,
'\\bigwedge': 5,
'\\bigotimes': 6,
'\\bigoplus': 6,
'\\bigskip': 6}
The numbers indicate how long the minimal uniquely identifying prefix of a word is.
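If you want the prefixes themselves rather than just their lengths, a slice gets them (my addition; result here just names the dict comprehension computed above):
>>> result = { lst[i]: 1 + max(cp(lst[i-1:i+1]), cp(lst[i:i+2])) for i in range(1,len(lst)-1) }
>>> { word: word[:n] for word, n in result.items() }
{'\\bigoplus': '\\bigop',
 '\\bigotimes': '\\bigot',
 '\\bigskip': '\\bigsk',
 '\\bigsqcup': '\\bigsq',
 '\\biguplus': '\\bigu',
 '\\bigvee': '\\bigv',
 '\\bigwedge': '\\bigw'}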

Thought I'd dump this here since it was the most similar to a question I was about to ask:
Looking for a better solution (will report back when I find one) to iterating through a sequence of strings, trying to map the shortest unique string for/to each.
For example, in a sequence of:
['blue', 'black', 'bold']
# 'blu' --> 'blue'
# 'bla' --> 'black'
# 'bo' --> 'bold'
Looking to improve upon my first, feeble solution. Here's what I came up with:
# Note: Iterating through the keys in a dict, mapping the shortest
# unique prefix to the original string.
shortest_unique_strings = {}
for k in mydict:
    for ix in range(len(k)):
        # When the list comp yields only one item, the prefix is unique.
        # 'k[:ix+1]' is the current candidate prefix.
        if len([key for key in mydict if key.startswith(k[:ix+1])]) == 1:
            shortest_unique_strings[k[:ix+1]] = k
            break
Note: On improving efficiency: we should be able to remove keys/strings whose unique prefix has already been found, so that successive searches don't have to repeat work on those items.
Note: I specifically refrained from creating/using any functions outside of built-ins.
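If you are willing to reach beyond built-ins, here is one possible improvement along those lines (my own sketch, not a definitive implementation): count every prefix once with collections.Counter, then take each word's first prefix whose count is 1. It assumes no word in the list is a prefix of another word.
from collections import Counter

words = ['blue', 'black', 'bold']
counts = Counter(w[:i] for w in words for i in range(1, len(w) + 1))
# For each word, the first prefix seen exactly once is its shortest unique prefix.
shortest_unique_strings = {
    next(w[:i] for i in range(1, len(w) + 1) if counts[w[:i]] == 1): w
    for w in words
}
# {'blu': 'blue', 'bla': 'black', 'bo': 'bold'}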

Related

Python - how to recursively search a variable substring in texts that are elements of a list

Let me explain better what I mean in the title.
Examples of strings to search in (strings of variable length,
each one an element of a list; very large in reality):
STRINGS = ['sftrkpilotndkpilotllptptpyrh', 'ffftapilotdfmmmbtyrtdll', 'gftttepncvjspwqbbqbthpilotou', 'htfrpilotrtubbbfelnxcdcz']
The substring to find, which I know for sure:
contained in each element of STRINGS
is also contained in a SOURCE string
is of a certain fixed LENGTH (5 characters in this example).
SOURCE = ['gfrtewwxadasvpbepilotzxxndffc']
I am trying to write a Python3 program that finds this hidden word of 5 characters that is in SOURCE and at what position(s) it occurs in each element of STRINGS.
I am also trying to store the results in an array or a dictionary (I do not know what is more convenient at the moment).
Moreover, I need to perform other searches of the same type but with different LENGTH values, so this value should be provided by a variable in order to be of more general use.
I know that the first point has already been solved in previous posts, but
never (as far as I know) together with the second point, which is the part of the code I was not able to deal with successfully (I do not post my code because I know it is just too far from being fixable).
Any help from this great community is highly appreciated.
-- Maurizio
You can iterate over the source string and for each sub-string use the re module to find the positions within each of the other strings. Then if at least one occurrence was found for each of the strings, yield the result:
import re
def find(source, strings, length):
    for i in range(len(source) - length + 1):  # +1 so the last window is included
        sub = source[i:i+length]
        positions = {}
        for s in strings:
            # positions[s] = [m.start() for m in re.finditer(re.escape(sub), s)]
            positions[s] = [j for j in range(len(s)) if s.startswith(sub, j)]  # Using built-in functions.
            if not positions[s]:
                break
        else:
            yield sub, positions
And the generator can be used as illustrated in the following example:
import pprint

pprint.pprint(dict(find(
    source='gfrtewwxadasvpbepilotzxxndffc',
    strings=['sftrkpilotndkpilotllptptpyrh',
             'ffftapilotdfmmmbtyrtdll',
             'gftttepncvjspwqbbqbthpilotou',
             'htfrpilotrtubbbfelnxcdcz'],
    length=5
)))
which produces the following output:
{'pilot': {'ffftapilotdfmmmbtyrtdll': [5],
           'gftttepncvjspwqbbqbthpilotou': [21],
           'htfrpilotrtubbbfelnxcdcz': [4],
           'sftrkpilotndkpilotllptptpyrh': [5, 13]}}

Doubts about string

So, I'm doing an exercise in Python, and I tried to use the terminal to go through it step by step to understand what's happening, but I couldn't.
I mainly want to understand why the conditional returns just the index 0.
Isn't looking for 'casino' in 'Casinoville'.lower() the same thing?
Exercise:
Takes a list of documents (each document is a string) and a keyword.
Returns list of the index values into the original list for all documents containing the keyword.
Exercise solution
def word_search(documents, keyword):
    indices = []
    for i, doc in enumerate(documents):
        tokens = doc.split()
        normalized = [token.rstrip('.,').lower() for token in tokens]
        if keyword.lower() in normalized:
            indices.append(i)
    return indices
My solution
def word_search(documents, keyword):
    return [i for i, word in enumerate(documents)
            if keyword.lower() in word.rstrip('.,').lower()]
Run
>>> doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
Expected output
>>> word_search(doc_list, 'casino')
>>> [0]
Actual output
>>> word_search(doc_list, 'casino')
>>> [0, 2]
Let's try to understand the difference.
The "result" function can be written with list-comprehension:
def word_search(documents, keyword):
return [i for i, word in enumerate(documents)
if keyword.lower() in
[token.rstrip('.,').lower() for token in word.split()]]
The problem happens with the string "Casinoville" at index 2.
See the output:
print([token.rstrip('.,').lower() for token in doc_list[2].split()])
# ['casinoville']
And here is the matter: you check whether a word is in a list. The answer is True only if the whole string matches an element (this gives the expected output).
However, in your solution you only check whether a word contains a substring: there the in condition applies to the string itself and not to a list.
See it:
# On the list :
print('casino' in [token.rstrip('.,').lower() for token in doc_list[2].split()])
# False
# On the string:
print('casino' in [token.rstrip('.,').lower() for token in doc_list[2].split()][0])
# True
As a result, in the first case "Casinoville" isn't included, while it is in the second one.
Hope that helps!
The question says "Returns list of the index values into the original list for all documents containing the keyword".
You need to consider whole words only.
In the "Casinoville" case, the word "casino" does not occur, since that document only contains the word "Casinoville".
When you use the in operator, the result depends on the type of object on the right hand side. When it's a list (or most other kinds of containers), you get an exact membership test. So 'casino' in ['casino'] is True, but 'casino' in ['casinoville'] is False because the strings are not equal.
When the right-hand side of in is a string though, it does something different. Rather than looking for an exact match against a single character (which is what strings contain if you think of them as sequences), it does a substring match. So 'casino' in 'casinoville' is True, as would be 'casino' in 'montecasino' or 'casino' in 'foocasinobar' (it's not just prefixes that are checked).
For your problem, you want exact matches to whole words only. The reference solution uses str.split to separate words (with no arguments it splits on any kind of whitespace). It then cleans up the words a bit (stripping off punctuation marks), then does an in match against the list of strings.
Your code never splits the strings you are passed. So when you do an in test, you're doing a substring match on the whole document, and you'll get false positives when you match part of a larger word.
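A quick demonstration of the two behaviours of in described above (my own illustration):
tokens = [t.rstrip('.,').lower() for t in "Casinoville".split()]
print(tokens)                 # ['casinoville']
print('casino' in tokens)     # False: exact membership test against a list
print('casino' in tokens[0])  # True: substring test against a string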

How to find sentence clauses that match word sequences? python

I have a large number of sentences from which I want to extract clauses/segments that match certain word combinations. I have the following code that works, but only with a single one-word string. I cannot find a way to extend it to work with multiple strings and with two-word strings. I thought this was simple and had been asked by others before me, but I could not find the answer. Can anybody help me?
This is my code:
import pandas as pd

df = pd.read_csv('text.csv')
identifiers = ('what')
sentence = df['A']
for i in sentence:
    i = i.split()
    if identifiers in i:
        index = i.index(identifiers)
        print(i[index:])
Given a sentence like this:
"Given that I want to become an entrepreneur, I am wondering what collage to attend."
and a list of two-word identifiers such as this:
identifiers = [('I am', 'I can' ..., 'I will')] # There could be dozens
how can I achieve a result like this?
I am wondering what collage to attend.
I tried extending the code above, using isin() and something like if any([x in i for x in identifiers]), but found no solution. Any suggestions?
It does not work for multiple-word phrases because you used split. Since split breaks the sentence on whitespace (by default), logically there won't be any single element left containing a space.
You can use in immediately to test if a certain string contains any other:
>>> sentence = "Given that I want to become an entrepreneur, I am wondering what collage to attend."
>>> identifiers = ['I am', 'I can', 'I will']
>>> for i in identifiers:
... if i in sentence:
... print (sentence[sentence.index(i):])
...
I am wondering what collage to attend.
Your attempt, any([x in sentence for x in identifiers]), is built on a list comprehension that for these strings yields
[True, False, False]
and while that gives some useful information, it still doesn't give an index; it would require another loop over this result to actually print anything. (And the any part is not necessary unless you specifically and only want to know whether a sentence contains such a phrase at all.)
So the [x in sentence ..] list comprehension only yields a list of True and False, with which you cannot do much on its own; as a direct approach it's a dead end.
But it suggests an alternative:
>>> [sentence.index(x) for x in identifiers if x in sentence]
[45]
which leads us to a list of results:
>>> [sentence[sentence.index(x):] for x in identifiers if x in sentence]
['I am wondering what collage to attend.']
If you add 'I want' to your list of identifiers, you still get a correct result, now consisting of two sentence fragments (both all the way up to the end):
['I am wondering what collage to attend.', 'I want to become an entrepreneur, I am wondering what collage to attend.']
(For fun and while I'm at it: if you want to clip off the excess at the first comma, add a regexp that matches everything except a comma:
>>> [re.match(r'^([^,]+)', sentence[sentence.index(x):]).groups(0)[0] for x in identifiers if x in sentence]
['I am wondering what collage to attend.', 'I want to become an entrepreneur']
Never mind the groups(0)[0] part at the end of that regex, it's just to coerce the SRE_Match object back into a regular string.)

algorithms for fast string approximate matching

Given a source string s and n equal length strings, I need to find a quick algorithm to return those strings that have at most k characters that are different from the source string s at each corresponding position.
What is a fast algorithm to do so?
PS: I have to point out that this is an academic question. I want to find the most efficient algorithm possible.
Also, I missed one very important piece of information: the n equal-length strings form a dictionary, against which many source strings s will be queried. There seems to be room for some sort of preprocessing step to make the queries more efficient.
My gut instinct is just to iterate over each of the n strings, maintaining a counter of how many characters differ from s, but I'm not claiming it is the most efficient solution. However, it would be O(n), so unless this is a known performance problem, or an academic question, I'd go with that.
Sedgewick in his book "Algorithms" writes that Ternary Search Tree allows "to locate all words within a given Hamming distance of a query word". Article in Dr. Dobb's
Given that the strings are fixed length, you can compute the Hamming distance between two strings to determine the similarity; this is O(n) on the length of the string. So, worst case is that your algorithm is O(nm) for comparing your string against m words.
As an alternative, a fast solution that's also a memory hog is to preprocess your dictionary into a map; keys are tuples (p, c) where p is a position in the string and c is the character at that position, and values are the strings that have character c at position p (so "the" will be in the map under the keys (0, 't'), (1, 'h') and (2, 'e')). To query the map, iterate through the query string's characters and construct a result map from the retrieved strings; keys are strings, values are the number of times each string has been retrieved from the primary map (so with the query string "the", the key "thx" will have a value of 2, and the key "tee" will also have a value of 2, since it matches at positions 0 and 2). Finally, iterate through the result map and discard strings whose values are less than K.
You can save memory by discarding keys that can't possibly equal K when the result map has been completed. For example, if K is 5 and N is 8, then when you've reached the 4th-8th characters of the query string you can discard any retrieved strings that aren't already in the result map since they can't possibly have 5 matching characters. Or, when you've finished with the 6th character of the query string, you can iterate through the result map and remove all keys whose values are less than 3.
If need be you can offload the primary precomputed map to a NoSql key-value database or something along those lines in order to save on main memory (and also so that you don't have to precompute the dictionary every time the program restarts).
Rather than storing a tuple (p, c) as the key in the primary map, you can instead concatenate the position and character into a string (so (5, 't') becomes "5t", and (12, 'x') becomes "12x").
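A minimal sketch of that positional map in Python (the names and the min_matches parameter are my own; the answer describes the idea, not this exact code):
from collections import defaultdict

def build_index(words):
    # (position, char) -> list of words having that char at that position
    index = defaultdict(list)
    for w in words:
        for p, c in enumerate(w):
            index[(p, c)].append(w)
    return index

def query(index, s, min_matches):
    # word -> number of positions where it agrees with the query string
    hits = defaultdict(int)
    for p, c in enumerate(s):
        for w in index.get((p, c), ()):
            hits[w] += 1
    return [w for w, n in hits.items() if n >= min_matches]

words = ['the', 'thx', 'tee', 'cat']
idx = build_index(words)
# At most k=1 mismatch over length 3 means at least 2 matching positions.
print(query(idx, 'the', min_matches=2))  # ['the', 'thx', 'tee']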
Without knowing where in each input string the matching characters will be, for a particular string you might need to check every character no matter what order you check them in. Therefore it makes sense to just iterate over each string character by character and keep a running total of mismatches. If i is the number of mismatches so far, return false as soon as i exceeds k, and return true once fewer than k - i unchecked characters remain in the string (even if they all mismatched, the total would stay within k).
Note that depending on how long the strings are and how many mismatches you'll allow, it might be faster to iterate over the whole string rather than performing these checks, or perhaps to perform them only after every couple characters. Play around with it to see how you get the fastest performance.
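A sketch of that early-exit comparison (my own illustration; it assumes equal-length strings, as stated in the question):
def within_k(s, t, k):
    # True if equal-length strings s and t differ in at most k positions.
    mismatches = 0
    for a, b in zip(s, t):
        if a != b:
            mismatches += 1
            if mismatches > k:
                return False  # budget exceeded, stop early
    return True

print(within_k('karolin', 'kathrin', 3))  # True (Hamming distance is 3)
print(within_k('karolin', 'kathrin', 2))  # False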
My method, if we're thinking out loud :P I can't see a way to do this without going through each of the n strings, but I'm happy to be corrected. It would begin with a preprocessing pass that saves a second copy of your n strings with the characters sorted in ascending order.
The first part of the comparison would then be to check each string a character at a time, say n', against each character in s, say s'.
If s' is less than n', they are not equal, so move to the next s'. If n' is less than s', go to the next n'. Otherwise record a matching character. Repeat this until k mismatches are found or enough matches are found, and mark the string accordingly.
For further consideration, an added preprocessing step could record, for each adjacent pair of strings in n, the total number of characters that differ. This could then be used when comparing the strings to s: if a sufficient difference exists between a string and its adjacent one, there may be no need to compare the latter.

find frequency of every word

There is a question that was asked to me in an interview, but I was not able to answer it.
The question is:
You are given a directed graph in which every node is a character, and you are also given an array of strings.
The task is to calculate the frequency of every string in the array by searching in the graph.
My approach: I used a trie and a suffix tree, but the interviewer was not fully satisfied. Can you give me an algorithm for the given problem?
How about the following, to find the number of occurrences of a string s in a directed graph:
Start with a breadth-first search (marking already-visited nodes to avoid cycles).
When the first character is found, switch to a depth-first search with max depth = length(s).
If the string sequence is detected, increment the occurrence count for each occurrence found by the DFS.
Resume the BFS.
Some caveats:
I do not believe the DFS should share the BFS's visited-node list (you may need to go back to the beginning and overlap, for example).
The BFS should also not share the DFS's visited list. For example, you could be looking for "Alan" and have "AAlan"; make sure you restart on the second A.
Now for an array, I can just repeat this procedure for each string. Sure, there may be a more efficient solution, but I'd start off thinking about it this way.
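A rough sketch of this procedure in Python (my own illustration, using a depth-bounded DFS from every node rather than the full BFS/DFS mix; graph and labels are hypothetical adjacency and label maps):
def count_occurrences(graph, labels, s):
    # Count directed paths in `graph` whose node labels spell out `s`.
    # graph: {node: [successor nodes]}, labels: {node: character}.
    def paths_from(node, depth):
        if labels[node] != s[depth]:
            return 0
        if depth == len(s) - 1:
            return 1  # spelled the whole string
        return sum(paths_from(nxt, depth + 1) for nxt in graph.get(node, []))
    return sum(paths_from(node, 0) for node in graph)

# Toy example: the path 1 -> 2 -> 3 -> 4 spells "alan" exactly once.
graph = {1: [2], 2: [3], 3: [4], 4: []}
labels = {1: 'a', 2: 'l', 3: 'a', 4: 'n'}
print(count_occurrences(graph, labels, "alan"))  # 1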
Did your answer include any conversation about a breadth-first or depth-first search? If someone mentioned searching a graph, I'd almost always reply with a variation of one of these.
Here's another solution:
First we need to do some preprocessing on the string array.
Let's define C as the set of all the characters composing the strings in the array.
For each character in C, we are going to keep track of each string containing that character, its position in that string, and a Boolean value stating whether it's the last char in that string. This can be done using a dictionary.
For example, let's say our array is ['one', 'two', 'three']. Our dictionary would look something like this:
'o': (0, 0, false), (1, 2, true)
't': (1, 0, false), (2, 0, false)
'n': (0, 1, false)
'e': (0, 2, true), (2, 3, false), (2, 4, true)
'h': (2, 1, false)
'r': (2, 2, false)
'w': (1, 1, false)
Next we are going to use DFS and Dynamic Programming.
Basically, whenever you visit an edge, you check the parent and the child on the dict to see if they compose a substring and you store that information.
Using this method, you can easily detect all occurrences of every string in the array.
Building the preprocessing table can be done in O(L), where L is the sum of the lengths of all the strings in the array.
Discovering all occurrences can be done in O(m * k), where m is the number of edges (not the number of nodes, as a node can be discovered multiple times) and k is the number of strings.
The implementation can be a little tricky and there are some pitfalls you should avoid.
See this graph: each level has all 4*4 edges (hard to draw, please bear with me), so there may be a lot of occurrences.
I think the interviewer may be expecting dynamic programming:
process each string individually; let f[i][j] denote the number of ways to spell out the string's last j letters starting from node i. The rest is easy.
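A hedged sketch of that f[i][j] recurrence (my own illustration; graph and labels are the same hypothetical adjacency and label maps as in the earlier sketch):
def count_dp(graph, labels, s):
    # f[node][j] = number of paths from `node` spelling the last j letters of s.
    n = len(s)
    f = {node: [0] * (n + 1) for node in graph}
    for j in range(1, n + 1):
        for node in graph:
            if labels[node] != s[n - j]:
                continue  # this node cannot start the length-j suffix
            if j == 1:
                f[node][1] = 1
            else:
                f[node][j] = sum(f[nxt][j - 1] for nxt in graph.get(node, []))
    return sum(f[node][n] for node in graph)

graph = {1: [2], 2: [3], 3: [4], 4: []}
labels = {1: 'a', 2: 'l', 3: 'a', 4: 'n'}
print(count_dp(graph, labels, "alan"))  # 1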
