I have a problem with fuzzy search in the Python regex module.
This is working:
import regex
s = '2991 Nixon Avenue Chattanooga Tennessee'
regex.search(r"(?msi)(?=.*\bnixon\b)(?=.*\bchattanooga\b)",s)
This is not working (I removed a t from Chattanooga); the result is None:
import regex
s = '2991 Nixon Avenue Chatanooga Tennessee'
regex.search(r"(?msie)(?=.*\bnixon\b)(?=.*\bchattanooga\b){e=<3}",s)
What am I doing wrong here?
It looks like it is something with the positive lookahead and the word boundaries.
Note: this is just a simple example to get it working. In reality it is part of a more complex job.
As an aside, do I need to specify the fuzziness per regex item (nixon, chattanooga), or is it possible to do it for both at the same time, e.g. ((?=.*\bnixon)(?=.*\bchattanooga\b)){e=<3}?
I was applying the fuzziness to the lookahead itself instead of to its contents.
If it's "Chattanooga" that's fuzzy, do:
regex.search(r"(?msie)(?=.*\bnixon\b)(?=.*\b(?:chattanooga){e<=3}\b)",s)
I am trying to formulate a regex to get the IDs from the two example strings below:
/drugs/2/drug-19904-5106/magnesium-oxide-tablet/details
/drugs/2/drug-19906/magnesium-moxide-tablet/details
In the first case, I should get 19904-5106 and in the second case 19906.
So far I have tried several; the closest I could get is [drugs/2/drug]-.*\d, but it returns g-19904-5106 and g-19906.
Can anyone help me get rid of the "g-"?
Thank you in advance.
When writing a regex, consider the patterns you see so that you can align it correctly. For example, if you know that your desired IDs always appear in something resembling ABCD-1234-5678, where 1234-5678 is the ID you want, then you can use that. If you also know that your IDs are always digits, then you can refine the search even more.
For your example, using a regex string like
.+?-(\d+(?:-\d+)*)
should do the trick. In a Python script, that would look something like the following:
import re

match = re.search(r'.+?-(\d+(?:-\d+)*)', my_string)
if match:
    my_id = match.group(1)
The pattern may vary depending on the depth and complexity of your examples, but that works for both of the ones you provided.
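For instance, running it against both example paths from the question (a quick sketch):
import re

paths = [
    '/drugs/2/drug-19904-5106/magnesium-oxide-tablet/details',
    '/drugs/2/drug-19906/magnesium-moxide-tablet/details',
]
for p in paths:
    match = re.search(r'.+?-(\d+(?:-\d+)*)', p)
    if match:
        print(match.group(1))  # prints 19904-5106, then 19906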
This is the closest I could find: \d+|.\d+-.\d+
I am working on a wordle bot and I am trying to match words using regex. I am stuck at a problem where I need to look for specific permutations of a given word.
For example, if the word is "steal" these are all the permutations:
'tesla', 'stale', 'steal', 'taels', 'leats', 'setal', 'tales', 'slate', 'teals', 'stela', 'least', 'salet'.
I had some trouble creating a regex for this, but eventually stumbled on positive lookaheads, which solved the issue. The regex:
'(?=.*[s])(?=.*[l])(?=.*[a])(?=.*[t])(?=.*[e])'
But, if we are looking for specific permutations, how do we go about it?
For example, words that look like 's[lt]a[lt]e'. The matching words are 'slate', 'stale', 'state'. But I want to limit the count of l and t in the matched word, which means the output should be 'slate' and 'stale'. One obvious solution is the regex r'slate|stale', but this is not a general solution. I am trying to arrive at a general solution for any scenario, and the use of positive lookaheads above seemed like a starting point. But I am unable to arrive at a solution.
Do we combine positive lookaheads with normal regex?
s(?=.*[lt])a(?=.*[lt])e (Did not work)
Or do we write nested lookaheads or something?
A few more regexes that did not work:
s(?=.*[lt]a[tl]e)
s(?=.*[lt])(?=.*[a])(?=.*[lt])(?=.*[e])
I tried to look through the available posts on SO, but could not find anything that would help me understand this. Any help is appreciated.
You could append the regex which matches the permutations of interest to your existing regex. In your sample case, you would use:
(?=.*s)(?=.*l)(?=.*a)(?=.*t)(?=.*e)s[lt]a[lt]e
This will match only stale and slate; it won't match state because it fails the lookahead that requires an l in the word.
Note that you don't need the (?=.*s)(?=.*a)(?=.*e) in the above regex as they are required by the part that matches the permutations of interest. I've left them in to keep that part of the regex generic and not dependent on what follows it.
Demo on regex101
Note that to allow for duplicated characters you might want to change your lookaheads to something of this form:
(?=(?:[^s]*s){1}[^s]*)
You would change the quantifier on the group to match the number of occurrences of that character which are required.
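As a quick illustrative sketch (the candidate word list here is made up), filtering words with the combined pattern:
import re

words = ['slate', 'stale', 'state', 'steal', 'least']  # hypothetical candidates

# per-letter lookaheads plus the s[lt]a[lt]e shape from the question
pattern = re.compile(r'(?=.*s)(?=.*l)(?=.*a)(?=.*t)(?=.*e)s[lt]a[lt]e')
print([w for w in words if pattern.fullmatch(w)])  # ['slate', 'stale']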
I tried to make a matcher which could detect words like
'all-purpose'
I was trying to make a pattern like
pattern=[{'POS':'NOUN'}, {'ORTH':'-'},{'POS':'NOUN'}]
However, I realized that it only finds matches like
'all - purpose', with whitespace between the tokens, instead of 'all-purpose'.
How could I make a matcher like this?
It has to be a generalized pattern like noun-noun instead of specific words like 'Barack Obama', as in the example in the spaCy documentation.
Best,
What exactly are you trying to match? Using en_core_web_sm, "all-purpose" is three tokens and all has the ADV POS tag for me. So that might be the issue with your match pattern. If you just want hyphenated words this might be a better match:
pattern = [{'IS_ALPHA': True}, {'ORTH':'-'}, {'IS_ALPHA': True}]
More generally, you are correct that your pattern will only match three tokens, though that doesn't require whitespace - it depends on how the tokenizer works. For example, "that's" has no spaces but is two tokens.
If you are finding hyphenated words that occur as one token and want to match them, you can use regular expressions in Matcher rules. Here's an example of how that would work, from the docs:
pattern = [{"TEXT": {"REGEX": "deff?in[ia]tely"}}]
In your case it could just look like this:
pattern = [{"TEXT": {"REGEX": "-"}}]
There is a condition where I have to split my string such that all the alphabetic characters stay together as one unit and everything else is separated, like the example shown below.
Example:
Some_var='12/1/20 Balance Brought Forward 150,585.80'
output_var=['12/1/20','Balance Brought Forward','150,585.80']
Yes, you could use some regex to handle this.
import re

Some_var = '12/1/20 Balance Brought Forward 150,585.80'
match = re.split(r"([0-9\s\\\/\.,-]+|[a-zA-Z\s\\\/\.,-]+)", Some_var)
print(match)
You will get some extra spaces and empty strings in the result, but you can filter those out and you are good to go.
str.split isn't going to cut it here. You might want to look into regular expressions (abbreviated regex) to accomplish this.
Here's a link to the Python docs: re module
As for a pattern, you could try using something like this:
([0-9\s\\\/\.,-]+|[a-zA-Z\s\\\/\.,-]+)
then trim each part of the output.
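A small sketch of what that could look like end to end (output based on the example string from the question):
import re

some_var = '12/1/20 Balance Brought Forward 150,585.80'
parts = re.split(r"([0-9\s\\\/\.,-]+|[a-zA-Z\s\\\/\.,-]+)", some_var)
# re.split keeps the captured groups and leaves empty strings between them,
# so strip the whitespace and drop the empties
output_var = [p.strip() for p in parts if p.strip()]
print(output_var)  # ['12/1/20', 'Balance Brought Forward', '150,585.80']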
I would like to be able to take a regex and generate conforming data using the Python hypothesis library. For example, given a regex of
regex = re.compile('[a-zA-Z]')
This would match any English alpha character. An example generator for this could be:
import hypothesis.strategies
import string
hypothesis.strategies.text(alphabet=string.ascii_letters)
But ideally, I want to construct a string that will match any regex passed in.
There's a work-in-progress pull request for adding this feature. Nothing extant will let you do it easily, but looking at the PR might give you a good idea about how to translate any specific example you need.
Update: the from_regex strategy was added in Hypothesis 3.19.
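With a recent Hypothesis release, that looks roughly like this (a minimal sketch; the test body is mine):
import re
from hypothesis import given, strategies as st

pattern = re.compile('[a-zA-Z]')

# from_regex draws strings that contain a match for the given pattern
@given(st.from_regex(pattern))
def test_generated_string_matches(s):
    assert pattern.search(s) is not None

test_generated_string_matches()  # runs the property when executed directly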