word2vec cosine similarity greater than 1 - Arabic text - NLP

I have trained my word2vec model from gensim and I am getting the nearest neighbors for some words in the corpus. Here are the similarity scores:
top neighbors for الاحتلال:
الاحتلال: 1.0000001192092896
الاختلال: 0.9541053175926208
الاهتلال: 0.872565507888794
الاحثلال: 0.8386293649673462
الاكتلال: 0.8209128379821777
It is odd to get a similarity greater than 1. I cannot apply any stemming to my text because it includes many OCR spelling mistakes (I got the text from OCR-ed documents). How can I fix this issue?
Note: I am using model.similarity(t1, t2)
This is how I trained my Word2Vec Model:
import os
import time
import gensim

documents = list()
tokenize = lambda x: gensim.utils.simple_preprocess(x)
t1 = time.time()
docs = read_files(TEXT_DIRS, nb_docs=5000)
t2 = time.time()
print('Reading docs took: {:.3f} mins'.format((t2 - t1) / 60))
print('Number of documents: %i' % len(docs))
# Training the model
model = gensim.models.Word2Vec(docs, size=EMBEDDING_SIZE, min_count=5)
if not os.path.exists(MODEL_DIR):
    os.makedirs(MODEL_DIR)
model.save(os.path.join(MODEL_DIR, 'word2vec'))
weights = model.wv.vectors
index_words = model.wv.index2word
vocab_size = weights.shape[0]
embedding_dim = weights.shape[1]
print('Shape of weights:', weights.shape)
print('Vocabulary size: %i' % vocab_size)
print('Embedding size: %i' % embedding_dim)
Below is the read_files function I defined:
def read_files(text_directories, nb_docs):
    """
    Read in text files
    """
    documents = list()
    tokenize = lambda x: gensim.utils.simple_preprocess(x)
    print('started reading ...')
    for path in text_directories:
        count = 0
        # Read in all files in directory
        if os.path.isdir(path):
            all_files = os.listdir(path)
            for filename in all_files:
                if filename.endswith('.txt') and filename[0].isdigit():
                    count += 1
                    with open('%s/%s' % (path, filename), encoding='utf-8') as f:
                        doc = f.read()
                        doc = clean_text_arabic_style(doc)
                        doc = clean_doc(doc)
                        documents.append(tokenize(doc))
                    if count % 100 == 0:
                        print('processed {} files so far from {}'.format(count, path))
                if count >= nb_docs and count <= nb_docs + 200:
                    print('REACHED END')
                    break
        if count >= nb_docs and count <= nb_docs:
            print('REACHED END')
            break
    return documents
I tried this thread, but it does not help me, because my text is Arabic and full of OCR misspellings.
Update
I tried the following (computing the similarity of the exact same word with itself):
print(model.similarity('الاحتلال','الاحتلال'))
and it gave me the following result:
1.0000001

Definitionally, the cosine-similarity measure should max at 1.0.
But in practice, floating-point number representations in computers have tiny imprecisions in the deep-decimals. And, especially when a number of calculations happen in a row (as with the calculation of this cosine-distance), those will sometimes lead to slight deviations from what the expected maximum or exactly-right answer "should" be.
(Similarly: sometimes calculations that, mathematically, should result in the exact same answer no matter how they are reordered/regrouped deviate slightly when done in different orders.)
But, as these representational errors are typically "very small", they're usually not of practical concern. (They are especially small in the range of numbers around -1.0 to 1.0, but can become quite large when dealing with giant numbers.)
In your original case, the deviation is just 0.000000119209289. In the word-to-itself case, the deviation is just 0.0000001. That is, about one-ten-millionth off. (Your other sub-1.0 values have similar tiny deviations from perfect calculation, but they aren't noticeable.)
In most cases, you should just ignore it.
If you find it distracting to you or your users in numerical displays/logging, simply choosing to display all such values to a limited number of after-the-decimal-point digits – say 4 or even 5 or 6 – will hide those noisy digits. For example, using a Python 3 format-string:
sim = model.similarity('الاحتلال','الاحتلال')
print(f"{sim:.6}")
(Libraries like numpy that work with large arrays of such floats can even set a global default for display precision – see numpy.set_printoptions – though that shouldn't affect the raw Python floats you're examining.)
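For example, a quick sketch of that numpy display setting (it only changes how arrays are printed, not the stored values):
import numpy as np

# This only changes how numpy arrays are printed, not the stored values.
np.set_printoptions(precision=4)
print(np.array([1.0000001192092896, 0.9541053175926208]))   # -> [1.     0.9541]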
If for some reason you absolutely need the values to be capped at 1.0, you could add extra code to do that. But, it's usually a better idea to choose your tests & printouts to be robust to, & oblivious with regard to, such tiny deviations from perfect math.
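If you do decide to cap, a minimal sketch (assuming model is the trained Word2Vec model from the question) could be:
import numpy as np

sim = model.similarity('الاحتلال', 'الاحتلال')   # may come back as e.g. 1.0000001
sim_capped = min(float(sim), 1.0)                # clamp a single value
# for a whole array of similarity scores:
scores = np.array([1.0000001, 0.9541053, 0.8725655])
scores_capped = np.clip(scores, -1.0, 1.0)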

Related

Word2Vec Subsampling -- Implementation

I am implementing the Skipgram model, both in PyTorch and TensorFlow 2. I am having doubts about the implementation of subsampling of frequent words. Verbatim from the paper, the probability of subsampling (discarding) word wi is computed as
P(wi) = 1 - sqrt(t / f(wi))
where t is a custom threshold (usually a small value such as 0.0001) and f is the frequency of the word in the document. Although the authors implemented it in a different, but almost equivalent way, let's stick with this definition.
When computing P(wi), we can end up with negative values. For example, assume we have 100 words, and one of them appears far more often than the others (as is the case for my dataset).
import numpy as np
import seaborn as sns
np.random.seed(12345)
# generate counts in [1, 20]
counts = np.random.randint(low=1, high=20, size=99)
# add an extremely bigger count
counts = np.insert(counts, 0, 100000)
# compute frequencies
f = counts/counts.sum()
# define threshold as in paper
t = 0.0001
# compute probabilities as in paper
probs = 1 - np.sqrt(t/f)
sns.distplot(probs);
Q: What is the correct way to implement subsampling using this "probability"?
As additional info, I have seen that in keras the function keras.preprocessing.sequence.make_sampling_table takes a different approach:
def make_sampling_table(size, sampling_factor=1e-5):
    """Generates a word rank-based probabilistic sampling table.

    Used for generating the `sampling_table` argument for `skipgrams`.
    `sampling_table[i]` is the probability of sampling
    the i-th most common word in a dataset
    (more common words should be sampled less frequently, for balance).

    The sampling probabilities are generated according
    to the sampling distribution used in word2vec:

    ```
    p(word) = (min(1, sqrt(word_frequency / sampling_factor) /
        (word_frequency / sampling_factor)))
    ```

    We assume that the word frequencies follow Zipf's law (s=1) to derive
    a numerical approximation of frequency(rank):

    `frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`

    where `gamma` is the Euler-Mascheroni constant.

    # Arguments
        size: Int, number of possible words to sample.
        sampling_factor: The sampling factor in the word2vec formula.

    # Returns
        A 1D Numpy array of length `size` where the ith entry
        is the probability that a word of rank i should be sampled.
    """
    gamma = 0.577
    rank = np.arange(size)
    rank[0] = 1
    inv_fq = rank * (np.log(rank) + gamma) + 0.5 - 1. / (12. * rank)
    f = sampling_factor * inv_fq
    return np.minimum(1., f / np.sqrt(f))
I tend to trust deployed code more than paper write-ups, especially in a case like word2vec, where the word2vec.c code released by the paper's authors has been widely used and has served as the template for other implementations. If we look at its subsampling mechanism...
if (sample > 0) {
  real ran = (sqrt(vocab[word].cn / (sample * train_words)) + 1) * (sample * train_words) / vocab[word].cn;
  next_random = next_random * (unsigned long long)25214903917 + 11;
  if (ran < (next_random & 0xFFFF) / (real)65536) continue;
}
...we see that those words with tiny counts (.cn), which could give negative values in the original formula, instead give ran values greater than 1.0 here. A value greater than 1.0 can never be less than the masked-and-scaled random draw ((next_random & 0xFFFF) / (real)65536), which always falls below 1.0, so such words are never discarded. So, it seems the authors' intent was for all negative values of the original formula to mean "never discard".
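To make that concrete, here is a rough Python transcription of that keep-probability (a sketch, not the gensim or word2vec.c internals; counts stands in for the raw vocabulary counts and sample for the -sample threshold):
import numpy as np

def keep_probability(counts, sample=1e-3):
    # word2vec.c-style keep probability: sqrt(s/f) + s/f, capped at 1.0,
    # where f is the word's corpus frequency and s is the -sample threshold.
    counts = np.asarray(counts, dtype=float)
    f = counts / counts.sum()
    ran = np.sqrt(sample / f) + sample / f
    return np.minimum(ran, 1.0)    # rare words hit the 1.0 cap -> never discarded

# each occurrence of word i is then kept with probability keep_probability(counts)[i],
# e.g.: kept = np.random.random(len(counts)) < keep_probability(counts)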
As per the keras make_sampling_table() comment & implementation, they're not consulting the actual word-frequencies at all. Instead, they're assuming a Zipf-like distribution based on word-rank order to synthesize a simulated word-frequency.
If their assumptions were to hold – the ranked words come from a natural-language corpus with a Zipf-like frequency distribution – then I'd expect their sampling probabilities to be close to the down-sampling probabilities that would have been calculated from true frequency information. And that's probably "close enough" for most purposes.
I'm not sure why they chose this approximation. Perhaps other aspects of their usual processes have not maintained true frequencies through to this step, and they're expecting to always be working with natural-language texts, where the assumed frequencies will be generally true.
(As luck would have it, and because people often want to impute frequencies to public sets of word-vectors which have dropped the true counts but are still sorted from most- to least-frequent, just a few days ago I wrote an answer about simulating a fake-but-plausible distribution using Zipf's law – similar to what this keras code is doing.)
But, if you're working with data that doesn't match their assumptions (as with your synthetic or described datasets), their sampling-probabilities will be quite different than what you would calculate yourself, with any form of the original formula that uses true word frequencies.
In particular, imagine a distribution with one token appearing a million times, then a hundred tokens all appearing just 10 times each. Those hundred tokens' order in the "rank" list is arbitrary – truly, they're all tied in frequency. But the simulation-based approach, by fitting a Zipfian distribution to that ordering, will in fact sample each of them very differently. The one 10-occurrence word lucky enough to be in the 2nd rank position will be far more downsampled, as if it were far more frequent. And the 1st-rank "tall head" value, by having its true frequency under-approximated, will be less down-sampled than it otherwise would be. Neither of those effects seems beneficial, or in the spirit of the frequent-word-downsampling option, which should only "thin out" very-frequent words, and in all cases leave words that had the same frequency as each other in the original corpus roughly equivalently present to each other in the down-sampled corpus.
So for your case, I would go with the original formula (probability-of-discarding-that-requires-special-handling-of-negative-values), or the word2vec.c practical/inverted implementation (probability-of-keeping-that-saturates-at-1.0), rather than the keras-style approximation.
(As a totally-separate note that nonetheless may be relevant for your dataset/purposes, if you're using negative-sampling: there's another parameter controlling the relative sampling of negative examples, often fixed at 0.75 in early implementations, that one paper has suggested can usefully vary for non-natural-language token distributions & recommendation-related end-uses. This parameter is named ns_exponent in the Python gensim implementation, but simply a fixed power value internal to a sampling-table pre-calculation in the original word2vec.c code.)
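For reference, both of these knobs are ordinary constructor arguments in gensim's Word2Vec; a usage sketch with illustrative values (assuming sentences is your tokenized corpus):
from gensim.models import Word2Vec

model = Word2Vec(
    sentences,          # your tokenized corpus, as a list of token lists
    sample=1e-4,        # frequent-word downsampling threshold (t in the paper)
    negative=5,         # number of negative samples per positive example
    ns_exponent=0.75,   # exponent shaping the negative-sampling distribution
)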

LSTM getting caught up in loop

I recently implemented a name-generating RNN "from scratch", which was doing OK but far from perfect. So I thought about trying my luck with pytorch's LSTM class to see if it makes a difference. Indeed it does, and the output looks way better for the first 7~8 characters. But then the network gets caught in a loop and outputs things like "laulaulaulau" or "rourourourou" (it is supposed to generate French names).
Is this a frequently occurring problem? If so, do you know a way to fix it? I'm concerned about the fact that the network doesn't produce EOS tokens...
This issue has already been asked here: Why does my keras LSTM model get stuck in an infinite loop?
but was not really answered, hence my post.
Here is the model:
class pytorchLSTM(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(pytorchLSTM, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.output_layer = nn.Linear(hidden_size, input_size)
        self.tanh = nn.Tanh()
        self.softmax = nn.LogSoftmax(dim=2)

    def forward(self, input, hidden):
        out, hidden = self.lstm(input, hidden)
        out = self.tanh(out)
        out = self.output_layer(out)
        out = self.softmax(out)
        return out, hidden
The input and target are two sequences of one-hot encoded vectors respectively with a start of sequence and end of sequence vector at the start and the end. They represent the characters inside of a name taken from the name list (database).
I use a <start of sequence> and an <end of sequence> token on each name from the database. Here are the functions I use:
def inputTensor(line):
    # tensor starts with <start of sequence> token.
    tensor = torch.zeros(len(line) + 1, 1, n_letters)
    tensor[0][0][n_letters - 2] = 1
    for li in range(len(line)):
        letter = line[li]
        tensor[li + 1][0][all_letters.find(letter)] = 1
    return tensor

# LongTensor of second letter to end (EOS) for target
def targetTensor(line):
    letter_indexes = [all_letters.find(line[li]) for li in range(len(line))]
    letter_indexes.append(n_letters - 1)  # EOS
    return torch.LongTensor(letter_indexes)
Training loop:
def train_lstm(model):
    start = time.time()
    criterion = nn.NLLLoss()
    optimizer = torch.optim.Adam(model.parameters())
    n_iters = 20000
    print_every = 1000
    plot_every = 500
    all_losses = []
    total_loss = 0
    for iter in range(1, n_iters + 1):
        line = randomChoice(category_line)
        input_line_tensor = inputTensor(line)
        target_line_tensor = targetTensor(line).unsqueeze(-1)
        optimizer.zero_grad()
        loss = 0
        output, hidden = model(input_line_tensor)
        for i in range(input_line_tensor.size(0)):
            l = criterion(output[i], target_line_tensor[i])
            loss += l
        loss.backward()
        optimizer.step()
The sampling function:
def sample():
    max_length = 20
    input = torch.zeros(1, 1, n_letters)
    input[0][0][n_letters - 2] = 1
    output_name = ""
    hidden = (torch.zeros(2, 1, lstm.hidden_size), torch.zeros(2, 1, lstm.hidden_size))
    for i in range(max_length):
        output, hidden = lstm(input)
        output = output[-1][:][:]
        l = torch.multinomial(torch.exp(output[0]), num_samples=1).item()
        if l == n_letters - 1:
            break
        else:
            letter = all_letters[l]
            output_name += letter
            input = inputTensor(letter)
    return output_name
The typical sampled output looks something like this:
Laurayeerauerararauo
Leayealouododauodouo
Courouauurourourodau
Do you know how I can improve that ?
I found the explanation:
When using instances of the LSTM class as part of an RNN, the default input dimensions are (seq_length, batch_dim, input_size). To be able to interpret the output of the lstm as a probability (over the set of inputs), I needed to pass it to a Linear layer before the Softmax call, which is where the problem happens: Linear instances expect the input to be in the format (batch_dim, seq_length, input_size).
To fix this, one needs to pass batch_first = True as an argument to the LSTM upon creation, and then feed the RNN with an input of the form (batch_dim, seq_length, input_size).
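A minimal sketch of that fix (the sizes are illustrative, not from the question):
import torch
import torch.nn as nn

input_size, hidden_size = 59, 128                 # e.g. n_letters and some hidden size
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

x = torch.zeros(1, 10, input_size)                # (batch_dim, seq_length, input_size)
out, (h, c) = lstm(x)
print(out.shape)                                  # torch.Size([1, 10, 128])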
Some tips to improve the network in the order of importance (and ease of implementing):
1. Training data
If you want your generated samples to look real, you have to give some real data to the network. Find a set of names, split those into letters and transform into indices. This step alone would give way more realistic names.
2. Separate start and end tokens.
I would go with <SON> (Start Of Name) and <EON> (End Of Name). In this configuration the neural network can learn combinations of letters leading to <EON> and combinations of letters coming after <SON>. At the moment it's trying to fit two different concepts into one custom token; a minimal sketch is below.
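A possible sketch (the token strings and the tiny names list are illustrative):
SON, EON = "<SON>", "<EON>"                       # two distinct special symbols

def wrap_name(name):
    # "Laura" -> ["<SON>", "L", "a", "u", "r", "a", "<EON>"]
    return [SON] + list(name) + [EON]

names = ["Laura", "Louis"]                        # stand-in for the real name list
vocab = sorted({ch for n in names for ch in n}) + [SON, EON]
char2idx = {ch: i for i, ch in enumerate(vocab)}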
3. Unsupervised Pretraining
You may want to give your letters some semantic meaning instead of one-hot encoded vectors; check word2vec for a basic approach.
Basically, each letter would be represented by an N-dimensional vector (say 50 dimensions) and letters would be closer in this space if they occur more often next to each other (a closer to k than to x).
A simple way to implement that would be to take some text dataset and try to predict the next letter at each timestep. Each letter would be represented by a random vector at the beginning; through backpropagation the letter representations would be updated to reflect their similarity.
Check pytorch embedding tutorial for more info.
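For instance, a bare-bones letter embedding in pytorch might look like this (the sizes are made up):
import torch
import torch.nn as nn

n_letters, embedding_dim = 59, 50                 # made-up sizes
embedding = nn.Embedding(n_letters, embedding_dim)

letter_indices = torch.tensor([[3, 7, 12]])       # (batch, seq_length) of letter indices
embedded = embedding(letter_indices)              # (batch, seq_length, embedding_dim)
print(embedded.shape)                             # torch.Size([1, 3, 50])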
4. Different architecture
You may want to check Andrej Karpathy's idea for generating baby names. It is simply described here.
Essentially, after training, you feed your model with random letters (say 10) and tell it to predict the next letter.
You remove the last letter from the random seed and put the predicted one in its place. Iterate until <EON> is output.

make a confusion matrix for a classifier with 2 classes

I have a file with some sentences (a Persian sentence, a tab, a Persian word (tag), a tab, an English word (tag)). The English words indicate the class of each sentence. There are 2 classes in this file, "passion" and "salty". I classified the sentences with the Naive Bayes algorithm and now I have to calculate precision and recall. For that I have to build a confusion matrix, but I don't know how. I wrote a small piece of code and assumed that "passion" is the positive class and "salty" is the negative class. The code returned an output for this case. But if I assume "salty" as positive and "passion" as negative, the numbers are totally different from the first case, and consequently when I calculate precision and recall I don't get the correct answer. Should I calculate tp, tn, fp and fn separately for the 2 classes (once for passion and once for salty), then average them, and calculate precision and recall from that average?
(Hint 1: argmax is the output of the NB algorithm; it is the tag the code predicted for each test sentence.
Hint 2: I have some other files with more than 2 classes, too.)
# t = line.strip().split("\t")
if t[2] == "passion" and argmax == "passion":
    tp += 1
elif t[2] == "passion" and argmax != "passion":
    fn += 1
elif t[2] == "salty" and argmax != "salty":
    fp += 1
elif t[2] == "salty" and argmax == "salty":
    tn += 1
print("tp", tp, "tn", tn, "fp", fp, "fn", fn)
You should use scikit-learn, which already provides confusion matrix and classification reports. A sample:
from sklearn.metrics import confusion_matrix, classification_report
# suppose your predictions are stored in a variable called preds
# and the true values are stored in a variable called y
print(confusion_matrix(y, preds))
print(classification_report(y, preds))
(By the way, scikit-learn works with both Python 2.7 and Python 3, so it is safe to use these functions with the model you have already built.)
Also, since I see you are in the NLP domain, you could use the facilities that the nltk library provides. I'm not an expert, but I suppose this should be useful.
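For example, nltk's ConfusionMatrix works on plain label lists (a sketch with toy labels, not your actual data):
from nltk.metrics import ConfusionMatrix

y     = ["passion", "salty", "passion", "salty"]    # true tags (toy example)
preds = ["passion", "passion", "passion", "salty"]  # predicted tags (toy example)
print(ConfusionMatrix(y, preds))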

Statistical Analysis Error? python 3 proof read please

The code below generates two random integers within range specified by argv, tests if the integers match and starts again. At the end it prints some stats about the process.
I've noticed, though, that increasing the value of argv reduces the percentage of tested possibilities dramatically.
This seems counter-intuitive to me, so my question is: is this an error in the code, or are the numbers real? And if they are real, what am I not thinking about?
#!/usr/bin/python3
import sys
import random
x = int(sys.argv[1])
a = random.randint(0,x)
b = random.randint(0,x)
steps = 1
combos = x**2
while a != b:
    a = random.randint(0,x)
    b = random.randint(0,x)
    steps += 1
percent = (steps / combos) * 100
print()
print()
print('[{} ! {}]'.format(a,b), end=' ')
print('equality!'.upper())
print('steps'.upper(), steps)
print('possible combinations = {}'.format(combos))
print('explored {}% of possibilities'.format(percent))
Thanks
EDIT
For example:
./runscrypt.py 100000
will return something like:
[65697 ! 65697] EQUALITY!
STEPS 115867
possible combinations = 10000000000
explored 0.00115867% of possibilities
"explored 0.00115867% of possibilities" <-- This number is too low?
This experiment is really a geometric distribution. That is:
Let Y be the random variable counting the number of iterations before a match is seen. Then Y is geometrically distributed with parameter p = 1/x (the probability of generating two matching integers).
The expected value is E[Y] = 1/p, where p is the probability just mentioned (the proof of this can be found in the link above). So in your case the expected number of iterations is 1/(1/x) = x.
The number of combinations is x^2.
So the expected percentage of explored possibilities is really x/(x^2) = 1/x.
As x approaches infinity, this number approaches 0.
In the case of x=100000, the expected percentage of explored possibilities = 1/100000 = 0.001% which is very close to your numerical result.
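A quick simulation (a sketch with a smaller x so it runs fast) agrees with that estimate:
import random

x, trials = 1000, 2000
total_steps = 0
for _ in range(trials):
    steps = 1
    while random.randint(0, x) != random.randint(0, x):
        steps += 1
    total_steps += steps

avg_steps = total_steps / trials
print(avg_steps)                       # close to x (randint is inclusive, so strictly about x + 1)
print(100 * avg_steps / x**2, '%')     # close to 100/x = 0.1 %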

How to avoid impression bias when calculating the CTR?

When we train a CTR (click-through rate) model, sometimes we need to calculate the real CTR from historical data, like this:
          #(clicks)
ctr = ----------------
       #(impressions)
We know that if the number of impressions is too small, the calculated CTR is not reliable, so we usually set a threshold and keep only the entries with enough impressions.
But we also know that the more impressions there are, the higher the confidence in the CTR. So my question is: is there an impression-normalized statistical method to calculate the CTR?
Thanks!
You probably need a representation of the confidence interval for your estimated CTR. The Wilson score interval is a good one to try.
You need the stats below to calculate the confidence interval:
p̂ (phat) is the observed CTR (the fraction #clicks / #impressions)
n is the total number of impressions
z_{α/2} is the (1 − α/2) quantile of the standard normal distribution
A simple implementation in Python is shown below. I use z_{α/2} = 1.96, which corresponds to a 95% confidence interval. I attached 3 test results at the end of the code.
# clicks    # impressions    # conf interval
2           10               (0.07, 0.45)
20          100              (0.14, 0.27)
200         1000             (0.18, 0.22)
Now you can set up some threshold to use the calculated confidence interval.
from math import sqrt

def confidence(clicks, impressions):
    n = impressions
    if n == 0: return 0
    z = 1.96  # 1.96 -> 95% confidence
    phat = float(clicks) / n
    denorm = 1. + (z*z/n)
    enum1 = phat + z*z/(2*n)
    enum2 = z * sqrt(phat*(1-phat)/n + z*z/(4*n*n))
    return (enum1-enum2)/denorm, (enum1+enum2)/denorm

def wilson(clicks, impressions):
    if impressions == 0:
        return 0
    else:
        return confidence(clicks, impressions)

if __name__ == '__main__':
    print(wilson(2, 10))
    print(wilson(20, 100))
    print(wilson(200, 1000))

"""
--------------------
results:
(0.07048879557839793, 0.4518041980521754)
(0.14384999046998084, 0.27112660859398174)
(0.1805388068716823, 0.22099327100894336)
"""
If you treat this as a binomial parameter, you can do Bayesian estimation. If your prior on the CTR is uniform (a Beta distribution with parameters (1, 1)), then your posterior is Beta(1 + #clicks, 1 + #impressions − #clicks). Your posterior mean is (#clicks + 1) / (#impressions + 2) if you want a single summary statistic of this posterior, but you probably don't, and here's why:
I don't know what your method for determining whether the CTR is high enough is, but let's say you're interested in everything with ctr > 0.9. You can then use the cumulative distribution function of the Beta distribution to look at what proportion of probability mass is over the 0.9 threshold (this will just be 1 minus the CDF at 0.9). In this way, your threshold will naturally incorporate uncertainty about the estimate because of the limited sample size.
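A small sketch of that idea with scipy (the counts and the 0.9 threshold are just example numbers):
from scipy.stats import beta

clicks, impressions = 20, 100
posterior = beta(1 + clicks, 1 + impressions - clicks)   # Beta(1 + #clicks, 1 + #impressions - #clicks)

print(posterior.mean())          # equals (clicks + 1) / (impressions + 2)
print(1 - posterior.cdf(0.9))    # probability mass above the ctr > 0.9 threshold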
There are many ways to calculate this confidence interval. An alternative to the Wilson score is the Clopper-Pearson interval, which I found useful in spreadsheets.
Upper bound = B(1 − alpha/2; x + 1, n − x)
Lower bound = B(alpha/2; x, n − x + 1)
Where
B() is the inverse Beta distribution (the Beta quantile function)
alpha is the confidence-level error (e.g. for a 95% confidence level, alpha is 5%)
n is the number of samples (e.g. impressions)
x is the number of successes (e.g. clicks)
In Excel an implementation for B() is provided by the BETA.INV formula.
There is no equivalent formula for B() in Google Sheets, but a Google Apps Script custom function can be adapted from the JavaScript statistical library jStat (e.g. search GitHub for jstat).
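In Python, the same bounds can be computed with scipy's inverse Beta CDF, which plays the role of B() / BETA.INV (a sketch, with example numbers):
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    # Exact (Clopper-Pearson) interval for x successes out of n trials.
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

print(clopper_pearson(20, 100))      # roughly (0.13, 0.29) at 95% confidence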
