Python - how to recursively search a variable substring in texts that are elements of a list

let me explain better what I mean in the title.
Examples of strings to search in (strings of variable length; each one is an
element of a list, which is very large in reality):
STRINGS = ['sftrkpilotndkpilotllptptpyrh', 'ffftapilotdfmmmbtyrtdll', 'gftttepncvjspwqbbqbthpilotou', 'htfrpilotrtubbbfelnxcdcz']
The substring to find, which I know is for sure:
contained in each element of STRINGS
is also contained in a SOURCE string
is of a certain fixed LENGTH (5 characters in this example).
SOURCE = ['gfrtewwxadasvpbepilotzxxndffc']
I am trying to write a Python3 program that finds this hidden word of 5 characters that is in SOURCE and at what position(s) it occurs in each element of STRINGS.
I am also trying to store the results in an array or a dictionary (I do not know what is more convenient at the moment).
Moreover, I need to perform other searches of the same type but with different LENGTH values, so this value should be provided by a variable in order to be of more general use.
I know that the first point has already been solved in previous posts, but
never (as far as I know) together with the second point, which is the part of the code I have not been able to deal with successfully (I am not posting my code because I know it is just too far from being fixable).
Any help from this great community is highly appreciated.
-- Maurizio

You can iterate over the source string and for each sub-string use the re module to find the positions within each of the other strings. Then if at least one occurrence was found for each of the strings, yield the result:
import re

def find(source, strings, length):
    for i in range(len(source) - length + 1):
        sub = source[i:i + length]
        positions = {}
        for s in strings:
            # positions[s] = [m.start() for m in re.finditer(re.escape(sub), s)]
            positions[s] = [j for j in range(len(s)) if s.startswith(sub, j)]  # Using built-in functions.
            if not positions[s]:
                break
        else:
            yield sub, positions
And the generator can be used as illustrated in the following example:
import pprint

pprint.pprint(dict(find(
    source='gfrtewwxadasvpbepilotzxxndffc',
    strings=['sftrkpilotndkpilotllptptpyrh',
             'ffftapilotdfmmmbtyrtdll',
             'gftttepncvjspwqbbqbthpilotou',
             'htfrpilotrtubbbfelnxcdcz'],
    length=5
)))
which produces the following output:
{'pilot': {'ffftapilotdfmmmbtyrtdll': [5],
           'gftttepncvjspwqbbqbthpilotou': [21],
           'htfrpilotrtubbbfelnxcdcz': [4],
           'sftrkpilotndkpilotllptptpyrh': [5, 13]}}

Related

Basic string slicing from indices

I will state the obvious that I am a beginner. I should also mention that I have been coding in Zybooks, which affects things. My textbook hasn't helped me much
I tried sub_lyric = rhyme_lyric[:]
Zybooks should be able to take an index number and get only that part of the sentence, but my book doesn't explain how to do that. If it is given [4:7] then it would output cow. Hopefully I have explained everything well.
You need to use slicing there:
sub_lyric = rhyme_lyric[start_index:end_index]
A string is a sequence of characters, and you can use string slicing to extract any sub-text from the main one. As you have observed:
sub_lyric = rhyme_lyric[:]
will copy the entire content of rhyme_lyric to sub_lyric.
To select only a portion of the text, specify the start_index (string indices start at 0) and the end_index (not included):
sub_lyric = rhyme_lyric[4:7]
will extract characters in rhyme_lyric from position 4 (included) to position 7 (not included) so the result will be cow.
You can check more on string slicing here: Python 3 introduction
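A minimal runnable sketch (the actual value of rhyme_lyric isn't shown in the exercise, so the string below is a hypothetical stand-in chosen so that positions 4 through 6 spell "cow"):

```python
rhyme_lyric = "The cow jumped over the moon."  # hypothetical stand-in

# Slice from index 4 (included) to index 7 (excluded).
sub_lyric = rhyme_lyric[4:7]
print(sub_lyric)  # cow
```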

Doubts about string

So, I'm doing an exercise in Python, and I tried to use the terminal to go through it step by step to understand what's happening, but I still don't.
I mainly want to understand why the conditional returns just the index 0.
Isn't looking for 'casino' in "Casinoville".lower() the same thing?
Exercise:
Takes a list of documents (each document is a string) and a keyword.
Returns list of the index values into the original list for all documents containing the keyword.
Exercise solution
def word_search(documents, keyword):
    indices = []
    for i, doc in enumerate(documents):
        tokens = doc.split()
        normalized = [token.rstrip('.,').lower() for token in tokens]
        if keyword.lower() in normalized:
            indices.append(i)
    return indices
My solution
def word_search(documents, keyword):
    return [i for i, word in enumerate(documents) if keyword.lower() in word.rstrip('.,').lower()]
Run
>>> doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
Expected output
>>> word_search(doc_list, 'casino')
>>> [0]
Actual output
>>> word_search(doc_list, 'casino')
>>> [0, 2]
Let's try to understand the difference.
The reference solution can be written with a list comprehension:
def word_search(documents, keyword):
    return [i for i, word in enumerate(documents)
            if keyword.lower() in
            [token.rstrip('.,').lower() for token in word.split()]]
The problem happens with the string "Casinoville" at index 2.
See the output:
print([token.rstrip('.,').lower() for token in doc_list[2].split()])
# ['casinoville']
And here is the matter: you try to check whether a word is in the list. The answer is True only if the whole string matches (this is the expected output).
However, in your solution, you only check whether a word contains a substring. In this case, the in condition is applied to the string itself and not to the list.
See it:
# On the list :
print('casino' in [token.rstrip('.,').lower() for token in doc_list[2].split()])
# False
# On the string:
print('casino' in [token.rstrip('.,').lower() for token in doc_list[2].split()][0])
# True
As a result, "Casinoville" isn't included in the first case, while it is in the second one.
Hope that helps!
The question says: "Returns list of the index values into the original list for all documents containing the keyword".
You need to consider whole words only.
In the "Casinoville" case, the word "casino" is not in it, since that document only contains the word "Casinoville".
When you use the in operator, the result depends on the type of object on the right hand side. When it's a list (or most other kinds of containers), you get an exact membership test. So 'casino' in ['casino'] is True, but 'casino' in ['casinoville'] is False because the strings are not equal.
When the right hand side of in is a string, though, it does something different. Rather than looking for an exact match against a single character (which is what strings contain if you think of them as sequences), it does a substring match. So 'casino' in 'casinoville' is True, as would be 'casino' in 'montecasino' or 'casino' in 'foocasinobar' (it's not just prefixes that are checked).
For your problem, you want exact matches to whole words only. The reference solution uses str.split to separate words (with no argument it splits on any kind of whitespace). It then cleans up the words a bit (stripping off punctuation marks), then does an in match against the list of strings.
Your code never splits the strings you are passed. So when you do an in test, you're doing a substring match on the whole document, and you'll get false positives when you match part of a larger word.
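Putting that together, a corrected version of your one-liner that splits first and then does an exact membership test (a sketch equivalent to the reference solution):

```python
def word_search(documents, keyword):
    # Split each document into words, strip trailing punctuation,
    # lowercase, then test exact membership of the keyword.
    return [i for i, doc in enumerate(documents)
            if keyword.lower() in [t.rstrip('.,').lower() for t in doc.split()]]

doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
print(word_search(doc_list, 'casino'))  # [0]
```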

How to find sentence clauses that match word sequences? python

I have a large number of sentences from which I want to extract clauses/segments that match certain word combinations. I have the following code that works, but it only works with a single one-word string. I cannot find a way to extend it to work with multiple strings and strings of two words. I thought this was simple and had been asked by others before me, but I could not find the answer. Can anybody help me?
This is my code:
import pandas as pd
df = pd.read_csv('text.csv')
identifiers = ('what')
sentence = df['A']
for i in sentence:
    i = i.split()
    if identifiers in i:
        index = i.index(identifiers)
        print(i[index:])
Given a sentence like this:
"Given that I want to become an entrepreneur, I am wondering what collage to attend."
and a list of two-word identifiers such as this:
identifiers = [('I am', 'I can' ..., 'I will')] # There could be dozens
how can I achieve a result like this?
I am wondering what collage to attend.
I tried: extending the code above, using isin() and something like if any([x in i for x in identifiers]) but no solution. Any suggestions?
It does not work for multiple-word phrases because you used split. Since it splits on spaces (by default), logically there won't be any single element left containing a space.
You can use in immediately to test if a certain string contains any other:
>>> sentence = "Given that I want to become an entrepreneur, I am wondering what collage to attend."
>>> identifiers = ['I am', 'I can', 'I will']
>>> for i in identifiers:
... if i in sentence:
... print (sentence[sentence.index(i):])
...
I am wondering what collage to attend.
Your attempt [x in sentence for x in identifiers], for these strings, yields
[True, False, False]
and while that gives some useful information, it still doesn't give the index; it would require another loop over this result to actually print the matches. (And the any part is only needed if you specifically and only want to know whether a sentence contains such a phrase at all.)
So the [x in sentence ...] list comprehension only yields a list of True and False values, with which you cannot do much, and it's a dead end.
But it suggests an alternative:
>>> [sentence.index(x) for x in identifiers if x in sentence]
[45]
which leads us to a list of results:
>>> [sentence[sentence.index(x):] for x in identifiers if x in sentence]
['I am wondering what collage to attend.']
If you add 'I want' to your list of identifiers, you still get a correct result, now consisting of two sentence fragments (both all the way up to the end):
['I am wondering what collage to attend.', 'I want to become an entrepreneur, I am wondering what collage to attend.']
(For fun and while I'm at it: if you want to clip off the excess at the first comma, add a regexp that matches everything except a comma:
>>> [re.match(r'^([^,]+)', sentence[sentence.index(x):]).groups(0)[0] for x in identifiers if x in sentence]
['I am wondering what collage to attend.', 'I want to become an entrepreneur']
Never mind the groups(0)[0] part at the end of that regex, it's just to coerce the SRE_Match object back into a regular string.)
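Scaled back up to the questioner's setting of many sentences (a sketch; the sentences below are stand-ins for the df['A'] column from the question):

```python
sentences = [
    "Given that I want to become an entrepreneur, I am wondering what collage to attend.",
    "Nothing relevant here.",
]
identifiers = ['I am', 'I can', 'I will']  # could be dozens

# Collect the clause from each matching identifier to the end of the sentence.
results = [s[s.index(ident):]
           for s in sentences
           for ident in identifiers
           if ident in s]
print(results)  # ['I am wondering what collage to attend.']
```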

Converting lists of digits stored as strings into integers Python 2.7

Among other things, my project requires the retrieval of distance information from file, converting the data into integers, then adding them to a 128 x 128 matrix.
I am at an impasse while reading the data from line.
I retrieve it with:
distances = []
with open(filename, 'r') as f:
    for line in f:
        if line[0].isdigit():
            distances.extend(line.splitlines())
This produces a list of strings.
whereas:
int(distances)     # does not work
int(distances[0])  # produces the correct integer when called through the console
However, the spaces foobar the procedure later on.
An example of list:
['966']
['966', '1513 2410']  # the distance list grows with each additional city. The first item is the distance of the second city from the first; the second item holds the distances of the third city from the first two.
int(distances[0])  # returns 966 in the console. A happy integer for the matrix. However:
int(distances[1])  # returns:
Traceback (most recent call last):
File "", line 1, in
ValueError: invalid literal for int() with base 10: '1513 2410'
I have a slight preference for more pythonic solutions, like list comprehension and the like, but in reality- any and all help is greatly appreciated.
Thank you for your time.
All the information you get from a file is a string at first. You have to parse the information and convert it to different types and formats in your program.
int(distances) does not work because, as you have observed, distances is a list of strings. You cannot convert an entire list to an integer. (What would be the correct answer?)
int(distances[0]) works because you are converting only the first string to an integer, and the string represents an integer so the conversion works.
int(distances[1]) doesn't work because, for some reason, there is no comma between the 2nd and 3rd elements of your list, so they are implicitly concatenated into the single string '1513 2410'. This cannot be converted to an integer because it contains a space.
There are a few different solutions that might work for you, but here are a couple of obvious ones for your use case:
distances.extend([int(elem) for elem in line.split()])
This will only work if you are certain every element of the list returned by line.split() can undergo this conversion. You can also convert the whole distances list later, all at once:
distances = [int(d) for d in distances]
or
distances = map(int, distances)
You should try a few solutions out and implement the one you feel gives you the best combination of working correctly and readability.
My guess is you want to split on all whitespace, rather than newlines. If the file's not large, just read it all in:
distances = map(int, open('file').read().split())
If some of the values aren't numeric:
distances = (int(word) for word in open('file').read().split() if word.isdigit())
If the file is very large, use a generator to avoid reading it all at once:
import itertools

with open('file') as dists:
    distances = itertools.chain.from_iterable(
        (int(word) for word in line.split()) for line in dists)

How would I look for the shortest unique subsequence from a set of words in Python?

If I have a set of similar words such as:
\bigoplus
\bigotimes
\bigskip
\bigsqcup
\biguplus
\bigvee
\bigwedge
...
\zebra
\zeta
I would like to find the shortest unique sequence of letters that characterizes each word uniquely,
i.e.
\bigop:
\bigoplus
\bigot:
\bigotimes
\bigsk:
\bigskip
EDIT: notice that the unique sequence identifier always starts from the beginning of the word. I'm writing an app that gives snippet suggestions when typing, so in general users will start typing from the start of the word,
and so on; the sequence only needs to be as long as is enough to characterize a word uniquely.
EDIT: but it needs to start from the beginning of the word. The characterization always begins at the beginning of the word.
My thoughts:
I was thinking of sorting the words and grouping them based on the first letter, then probably using a longest-common-subsequence algorithm to find the longest subsequence in common, taking its length and using length+1 characters for that unique substring. But I'm stuck, since the algorithms I know for longest common subsequence usually take only two parameters at a time, and I may have more than two words in each group starting with a particular letter.
Am I solving an already-solved problem? Google was no help.
I'm assuming you want to find the prefixes that uniquely identify the strings, because if you could pick any subsequence, then for example om would be enough to identify \bigotimes in your example.
You can make use of the fact that for a given word, the word with the longest common prefix will be adjacent to it in lexicographical order.
Since your dictionary seems to be sorted already, you can figure out the solution for every word by finding the longest prefix that disambiguates it from both its neighbors.
Example:
>>> lst = r"""
... \bigoplus
... \bigotimes
... \bigskip
... \bigsqcup
... \biguplus
... \bigvee
... \bigwedge
... """.split()
>>> lst.sort()  # necessary if lst is not already sorted
>>> lst = [""] + lst + [""]
>>> import os
>>> def cp(x): return len(os.path.commonprefix(x))
...
>>> { lst[i]: 1 + max(cp(lst[i-1:i+1]), cp(lst[i:i+2])) for i in range(1, len(lst)-1) }
{'\\bigvee': 5,
'\\bigsqcup': 6,
'\\biguplus': 5,
'\\bigwedge': 5,
'\\bigotimes': 6,
'\\bigoplus': 6,
'\\bigskip': 6}
The numbers indicate how long the minimal uniquely identifying prefix of a word is.
Thought I'd dump this here since it was the most similar to a question I was about to ask:
Looking for a better solution (will report back when I find one) to iterating through a sequence of strings, trying to map the shortest unique string for/to each.
For example, in a sequence of:
['blue', 'black', 'bold']
# 'blu' --> 'blue'
# 'bla' --> 'black'
# 'bo' --> 'bold'
Looking to improve upon my first, feeble solution. Here's what I came up with:
# Note: Iterating through the keys in a dict, mapping the shortest
# unique prefix to the original string.
shortest_unique_strings = {}
for k in mydict:
    for ix in range(len(k)):
        # When the list comprehension has only one item,
        # k[:ix+1] (the current prefix) is unique.
        if len([key for key in mydict if key.startswith(k[:ix+1])]) == 1:
            shortest_unique_strings[k[:ix+1]] = k
            break
Note: On improving efficiency: we should be able to remove those keys/strings that have already been found, so that successive searches don't have to repeat on those items.
Note: I specifically refrained from creating/using any functions outside of built-ins.
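One such improvement, borrowing the sorted-neighbors idea from the answer above (a sketch; assumes all words are distinct):

```python
import os

def shortest_unique_prefixes(words):
    """Map each word's shortest uniquely identifying prefix to the word."""
    ordered = sorted(words)
    padded = [""] + ordered + [""]  # sentinels so every word has two neighbors
    result = {}
    for i, word in enumerate(ordered, start=1):
        # The longer of the prefixes shared with the two lexicographic
        # neighbors determines how many characters disambiguate this word.
        shared = max(len(os.path.commonprefix(padded[i - 1:i + 1])),
                     len(os.path.commonprefix(padded[i:i + 2])))
        result[word[:shared + 1]] = word
    return result

print(shortest_unique_prefixes(['blue', 'black', 'bold']))
# {'bla': 'black', 'blu': 'blue', 'bo': 'bold'}
```

Each word is compared only with its two neighbors, so after the sort the scan is linear instead of rescanning the whole dictionary for every candidate prefix.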
