How to simplify the function which finds homographs? - python-3.x

I wrote a function which finds homographs in a text.
A homograph is a word that shares the same written form as another
word but has a different meaning.
For this I've used the POS tagger from NLTK (pos_tag).
The POS tagger processes a sequence of words and attaches a
part-of-speech tag to each word.
For example:
[('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'),
('completely', 'RB'), ('different', 'JJ')].
Code (edited):
from nltk import pos_tag, word_tokenize

def find_homographs(text):
    homographs_dict = {}
    if isinstance(text, str):
        text = word_tokenize(text)
    tagged_tokens = pos_tag(text)
    for tag1 in tagged_tokens:
        for tag2 in tagged_tokens:
            try:
                if homographs_dict[tag2] == tag1:
                    continue
            except KeyError:
                if tag1[0] == tag2[0] and tag1[1] != tag2[1]:
                    homographs_dict[tag1] = tag2
    return homographs_dict
It works, but it takes too much time because I've used two nested for loops. Please advise me how I can simplify it and make it much faster.

It may seem counterintuitive, but you can easily collect all POS tags for each word in your text, then keep just the words that have multiple tags.
from collections import defaultdict

alltags = defaultdict(set)
for word, tag in tagged_tokens:
    alltags[word].add(tag)

homographs = dict((w, tags) for w, tags in alltags.items() if len(tags) > 1)
Note the two-variable loop; it's a lot handier than writing tag1[0] and tag1[1]. defaultdict (and set) you'll have to look up in the manual.
Your output format cannot handle words with three or more POS tags, so the dictionary homographs has words as keys and sets of POS tags as values.
And two more things I would advise: (1) convert all words to lower case to catch more "homographs"; and (2) nltk.pos_tag() expects to be called on one sentence at a time, so you'll get more correct tags if you sent_tokenize() your text and word_tokenize() and pos_tag() each sentence separately.
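A minimal sketch combining both suggestions (illustrative only; it assumes text holds your raw string and reuses the alltags/homographs names from above):
from collections import defaultdict
from nltk import pos_tag, sent_tokenize, word_tokenize

alltags = defaultdict(set)
for sentence in sent_tokenize(text):
    # tag each sentence separately and lowercase before collecting
    for word, tag in pos_tag(word_tokenize(sentence)):
        alltags[word.lower()].add(tag)

homographs = {w: tags for w, tags in alltags.items() if len(tags) > 1}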

Here is a suggestion (not tested), but the main idea is to build a dictionary while parsing tagged_tokens, identifying homographs without a nested loop:
temp_dict = dict()
for tag in tagged_tokens:
    temp_dict.setdefault(tag[0], list()).append(tag[1])
for word in list(temp_dict.keys()):
    if len(temp_dict[word]) == 1:
        del temp_dict[word]
print(temp_dict)

Related

Ignoring filler words in part of speech pattern NLTK

I have a rule-based text matching program that I've written that operates based on rules created using specific POS patterns. So, for example, one rule is:
pattern = [('PRP', "i'll"), ('VB', ('jump', 'play', 'bite', 'destroy'))]
In this case, when analyzing my input text, this will only return results in a string that fit grammatically into this specific pattern, so:
I'll jump
I'll play
I'll bite
I'll destroy
My question involves extracting the same meaning from the text when people use the same text but add a superlative or any type of word that doesn't change the context. Right now it only does exact matches, but won't catch phrases like the first string in this example:
I'll 'freaking' jump
'Dammit' I'll play
I'll play 'dammit'
The word doesn't have to be specific; it's just making sure the program can still identify the same pattern with the addition of a non-contextual superlative or any other type of word with the same purpose. This is the flagger I've written, and I've given an example string:
string_list = [('Its', 'PRP$'), ('annoying', 'NN'), ('when', 'WRB'), ('a', 'DT'), ('kid', 'NN'), ('keeps', 'VBZ'), ('asking', 'VBG'), ('you', 'PRP'), ('to', 'TO'), ('play', 'VB'), ('but', 'CC'), ("I'll", 'NNP'), ('bloody', 'VBP'), ('play', 'VBP'), ('so', 'RB'), ('it', 'PRP'), ('doesnt', 'VBZ'), ('cry', 'NN')]
def find_match_pattern(string_list, pattern_dict):
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
    analyzer = SentimentIntensityAnalyzer()  # does a sentiment analysis on the output string
    filt_ = ['Filter phrases']  # not the patterns, just phrases I know I don't want
    filt_tup = [x.lower() for x in filt_]
    for rule, pattern in pattern_dict.items():  # pattern_dict is an OrderedDict courtesy of collections
        num_matched = 0
        for idx, tuple in enumerate(string_list):  # string_list is the input string that has been POS tagged
            matched = False
            if tuple[1] == list(pattern.keys())[num_matched]:
                if tuple[0] in pattern[tuple[1]]:
                    num_matched += 1
                else:
                    num_matched = 0
            else:
                num_matched = 0
            if num_matched == len(pattern):  # if the number of matching words equals the length of the pattern do this
                matched_string = ' '.join([i[0] for i in string_list])  # joined for the sentiment analysis score
                vs = analyzer.polarity_scores(matched_string)
                sentiment = vs['compound']
                if matched_string in filt_tup:
                    break
                elif (matched_string not in filt_tup) or (sentiment < -0.8):
                    matched = True
                    print(matched, '\n', matched_string, '\n', sentiment)
    return (matched, sentiment, matched_string, rule)
I know it's a really abstract (or down-the-rabbit-hole) question, so it may turn into a discussion, but if anyone has experience with this it would be awesome to see what you recommend.
Your question can be answered using spaCy's dependency tagger. spaCy provides a Matcher with many optional and switchable options.
In the case below, instead of relying on specific words or parts of speech, the focus is on certain syntactic functions, such as the nominal subject and the auxiliary verbs.
Here's a quick example:
import spacy
from spacy.matcher import Matcher

nlp = spacy.load('en')
matcher = Matcher(nlp.vocab, validate=True)

pattern = [{'DEP': 'nsubj', 'OP': '+'},  # OP + means there has to be at least one nominal subject - usually a pronoun
           {'DEP': 'aux', 'OP': '?'},    # OP ? means it can have one or zero auxiliary verbs
           {'POS': 'ADV', 'OP': '?'},    # now it looks for an adverb; it is also optional (OP ?)
           {'POS': 'VERB'}]              # finally, I've generalized it with a verb, but you can make one pattern for each verb or write a loop to do it

matcher.add("NVAV", None, pattern)

phrases = ["I'll really jump.",
           "Okay, I'll play.",
           "Dammit I'll play",
           "I'll play dammit",
           "He constantly plays it",
           "She usually works there"]

for phrase in phrases:
    doc = nlp(phrase)
    matches = matcher(doc)
    for match_id, start, end in matches:
        span = doc[start:end]
        print('Matched:', span.text)
Matched: I'll really jump
Matched: I'll play
Matched: I'll play
Matched: I'll play
Matched: He constantly plays
Matched: She usually works
You can always test your patterns in the live example: Spacy Live Example
You can extend it as you will. Read more here: https://spacy.io/usage/rule-based-matching
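If you want to restrict the match to the specific verbs from the question instead of any verb, here is a possible sketch (an assumption on my part, requiring spaCy v2.1+ which supports set-membership {'IN': ...} attributes):
pattern_specific = [{'DEP': 'nsubj', 'OP': '+'},
                    {'DEP': 'aux', 'OP': '?'},
                    {'POS': 'ADV', 'OP': '?'},
                    {'LEMMA': {'IN': ['jump', 'play', 'bite', 'destroy']}}]  # match only these verb lemmas
matcher.add("NVAV_SPECIFIC", None, pattern_specific)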

Convert everything in a dictionary to lower case, then filter on it?

import pandas as pd
import nltk
import os

directory = os.listdir(r"C:\...")
x = []
num = 0
for i in directory:
    x.append(pd.read_fwf("C:\\..." + i))
    x[num] = x[num].to_string()
So, once I have a dictionary x = [ ] populated by the read_fwf for each file in my directory:
I want to know how to make it so every single character is lowercase. I am having trouble understanding the syntax and how it is applied to a dictionary.
I want to define a filter that I can use to count occurrences of a list of words in this newly defined dictionary, e.g.,
list = [bus, car, train, aeroplane, tram, ...]
Edit: Quick unrelated question:
Is pd_read_fwf the best way to read .txt files? If not, what else could I use?
Any help is very much appreciated. Thanks
Edit 2: Sample data and output that I want:
Sample:
The Horncastle boar's head is an early seventh-century Anglo-Saxon
ornament depicting a boar that probably was once part of the crest of
a helmet. It was discovered in 2002 by a metal detectorist searching
in the town of Horncastle, Lincolnshire. It was reported as found
treasure and acquired for £15,000 by the City and County Museum, where
it is on permanent display.
Required output - changes everything in uppercase to lowercase:
the horncastle boar's head is an early seventh-century anglo-saxon
ornament depicting a boar that probably was once part of the crest of
a helmet. it was discovered in 2002 by a metal detectorist searching
in the town of horncastle, lincolnshire. it was reported as found
treasure and acquired for £15,000 by the city and county museum, where
it is on permanent display.
You shouldn't need to use pandas or dictionaries at all. Just use Python's built-in open() function:
# Open a file in read mode with a context manager
with open(r'C:\path\to\your\file.txt', 'r') as file:
    # Read the file into a string
    text = file.read()

# Use the string's lower() method to make everything lowercase
text = text.lower()
print(text)

# Split text by whitespace into a list of words
word_list = text.split()

# Get the number of elements in the list (the word count)
word_count = len(word_list)
print(word_count)
If you want, you can do it in the reverse order:
# Open a file in read mode with a context manager
with open(r'C:\path\to\your\file.txt', 'r') as file:
    # Read the file into a string
    text = file.read()

# Split text by whitespace into a list of words
word_list = text.split()

# Use a list comprehension to create a new list with the lower() method applied to each word
lowercase_word_list = [word.lower() for word in word_list]
print(lowercase_word_list)
Using a context manager for this is good since it automatically closes the file for you as soon as it goes out of scope (de-tabbed from the with statement block). Otherwise you would have to call open() and then file.close() yourself.
I think there are some other benefits to using context managers, but someone please correct me if I'm wrong.
I think what you are looking for is dictionary comprehension:
# Python 3
new_dict = {key: val.lower() for key, val in old_dict.items()}
# Python 2
new_dict = {key: val.lower() for key, val in old_dict.iteritems()}
items()/iteritems() gives you a list of tuples of the (keys, values) represented in the dictionary (e.g. [('somekey', 'SomeValue'), ('somekey2', 'SomeValue2')])
The comprehension iterates over each of these pairs, creating a new dictionary in the process. In the key: val.lower() section, you can do whatever manipulation you want to create the new dictionary.
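For example, with a small illustrative dict (not from the question):
old_dict = {'a': 'Hello', 'b': 'WORLD'}
new_dict = {key: val.lower() for key, val in old_dict.items()}
print(new_dict)   # {'a': 'hello', 'b': 'world'}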

How should I strip these tweets of words like "the" and "I"?

I'm trying to clean up a bunch of tweets so that they can be used for k-means clustering. I've written the following code that should strip each tweet of its unwanted characters.
from nltk.corpus import stopwords
import nltk
import json

with open("/Users/titus/Desktop/trumptweets.json", 'r', encoding='utf8') as f:
    data = json.loads(f.readline())

tweets = []
for sentence in data:
    tokens = nltk.wordpunct_tokenize(sentence['text'])
    type(tokens)
    text = nltk.Text(tokens)
    type(text)
    words = [w.lower() for w in text if w.isalpha() and w not in
             stopwords.words('english') and w is not 'the']
    s = " "
    useful_sentence = s.join(words)
    tweets.append(useful_sentence)
print(tweets)
I'm trying to remove words like "I" and "the", but for some reason I can't figure out how. If I look at the tweets after they've gone through the loop, the word "the" still occurs.
Question: How is it possible that there are still occurences of "the" and "I" in the tweets? How should I fix this?
Beware of the processing order.
Here are two test strings for you:
THIS THE REMAINS.
this the is removed
Because "THE" is not "the". You lowercase after filtering, but you should first lowercase then filter.
The bad news for you: k-means works horribly on noisy short text like tweets, because it is sensitive to noise and TF-IDF vectors need fairly long texts to be reliable. So carefully verify your results; they are probably not as good as they may seem in the first flush of enthusiasm.
Have you tried lowercasing w in the check?
words = [w.lower() for w in text if w.isalpha() and w.lower() not in
         stopwords.words('english') and w.lower() is not 'the']
is (and is not) is the (reference) identity check: it compares whether two variable names point to the same object in memory. Typically this is only used to compare with None, or in some other special cases.
In your case, use the != operator or the negation of == to compare with the string "the".
See also: Is there a difference between `==` and `is` in Python?
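A small illustration of the difference (the exact result of is can depend on CPython's string interning, so treat this as a sketch; newer Python versions even emit a SyntaxWarning for is with a literal):
w = "".join(["th", "e"])   # builds a new string object at runtime
print(w == "the")          # True: compares the values
print(w is "the")          # typically False: compares object identity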

iterating through values in a dictionary to invert key and values

I am trying to invert an Italian-English dictionary using the code that follows.
Some terms have one translation, while others have multiple possibilities. If an entry has multiple translations, I iterate through each word, adding it to the English-Italian dict (if not already present).
If there is a single translation it should not iterate, but as I have written the code, it does. Also, only the last translation of a term with multiple translations is added to the dictionary. I cannot figure out how to rewrite the code to resolve what should be a really simple task.
from collections import defaultdict

def invertdict():
    source_dict = {'paramezzale (s.m.)': ['hog', 'keelson', 'inner keel'], 'vento (s.m.)': 'wind'}
    english_dict = defaultdict(list)
    for parola, words in source_dict.items():
        if len(words) > 1:  # more than one translation ?
            for word in words:  # if true, iterate through each word
                word = str(word).strip(' ')
                print(word)
        else:  # only one translation, don't iterate!!
            word = str(words).strip(' ')
            print(word)
        if word in english_dict.keys():  # check to see if the term already exists
            if english_dict[word] != parola:  # check that the italian is not present
                #english_dict[word] = [english_dict[word], parola]
                english_dict[word].append(parola).strip('')
        else:
            english_dict[word] = parola.strip(' ')
    print(len(english_dict))
    for key, value in english_dict.items():
        print(key, value)
When this code is run, I get :
hog
keelson
inner keel
w
i
n
d
2
inner keel paramezzale (s.m.)
d vento (s.m.)
instead of
hog: paramezzale, keelson: paramezzale, inner keel: paramezzale, wind: vento
It would be easier to use lists everywhere in the dictionary, like:
source_dict = {'many translations': ['a', 'b'], 'one translation': ['c']}
Then you need 2 nested loops. Right now you're not always running the inner loop.
for italian_word, english_words in source_dict.items():
    for english_word in english_words:
        # print, add to english dict, etc.
If you can't change the source_dict format, you need to check the type explicitly. I would transform the single item into a list.
for italian_word, item in source_dict.items():
    if not isinstance(item, list):
        item = [item]
Full code:
from collections import defaultdict

source_dict = {'paramezzale (s.m.)': ['hog', 'keelson', 'inner keel'], 'vento (s.m.)': ['wind']}

english_dict = defaultdict(list)
for parola, words in source_dict.items():
    for word in words:
        word = str(word).strip(' ')
        # add to the list if not already present
        # english_dict is a defaultdict(list) so we can use .append directly
        if parola not in english_dict[word]:
            english_dict[word].append(parola)
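Printed afterwards (dict order follows insertion order in Python 3.7+), this gives roughly:
for english_word, italian_words in english_dict.items():
    print(english_word, italian_words)
# hog ['paramezzale (s.m.)']
# keelson ['paramezzale (s.m.)']
# inner keel ['paramezzale (s.m.)']
# wind ['vento (s.m.)']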

Check if a set of characters is contained in a string?

There is a pool of letters (chosen randomly), and you want to make a word with these letters. I found some code that can help me with this, but if the word has, for example, 2 L's and the pool only 1, I'd like the program to know when this happens.
If I understand this correctly, you will also need a list of all valid words in whichever language you are using.
Assuming you have this, then one strategy for solving this problem could be to generate a key for every word in the dictionary that is a sorted list of the letters in that word. You could then group all words in the dictionary by these keys.
Then the task of finding out if a valid word can be constructed from a given list of random characters would be easy and fast.
Here is a simple implementation of what I am suggesting:
list_of_all_valid_words = ['this', 'pot', 'is', 'not', 'on', 'top']

def make_key(word):
    return "".join(sorted(word))

lookup_dictionary = {}
for word in list_of_all_valid_words:
    key = make_key(word)
    lookup_dictionary[key] = lookup_dictionary.get(key, set()).union(set([word]))

def words_from_chars(s):
    return list(lookup_dictionary.get(make_key(s), set()))

print(words_from_chars('xyz'))
print(words_from_chars('htsi'))
print(words_from_chars('otp'))
Output:
[]
['this']
['pot', 'top']
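The question also asks about detecting when a word needs more copies of a letter than the pool provides; here is a minimal sketch of that check using collections.Counter (my own addition, not part of the answer above; the function name can_build is illustrative):
from collections import Counter

def can_build(word, pool):
    # every letter of the word must appear in the pool at least as many times
    need = Counter(word)
    have = Counter(pool)
    return all(have[letter] >= count for letter, count in need.items())

print(can_build('llama', 'almal'))   # True: two l's available in the pool
print(can_build('llama', 'almab'))   # False: only one l in the pool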
