Machine Learning - Classification algorithm - statistics

I want to find the following probability:
P(y = 1 | n = k; theta)
Read as:
the probability that the prediction is class 1, given number of words = k, parameterized by theta.
A traditional classification doesn't have the conditional probability (right?):
P(y = 1; theta)
How do I solve this?
EDIT:
For example, let's say I want to predict whether an email is spam or not based on the number of attachments.
Let y=1 indicate spam and y=0 be non-spam.
So,
P(y = 1 | num_attachments = 0; some attributes)
and so on.
Does that make sense?

Normally the number of attachments is just another attribute, so your probability is the same as
P(y = 1 | all attributes)
However, if you need some special treatment of attachments (say, the other attributes are numeric and attachments are boolean), you can compute the probabilities separately and then combine them as:
P(C | A, B) = P(C | A) * P(C | B) / P(C)
where C stands for the event y = 1, A for attachments, and B for the other attributes. This assumes A and B are conditionally independent given C, which is exactly the Naive Bayes assumption.
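As a rough sketch of how that combination could look in code (the function and the example numbers are my own, not from any particular library), normalizing against the complement event so the result is a proper probability:
def combine_posteriors(p_c_given_a, p_c_given_b, p_c):
    # unnormalized score for the event C (y = 1), per the formula above
    score_c = p_c_given_a * p_c_given_b / p_c
    # the same score for the complement of C, used to normalize
    score_not_c = (1 - p_c_given_a) * (1 - p_c_given_b) / (1 - p_c)
    return score_c / (score_c + score_not_c)

# e.g. P(spam | attachments) = 0.8, P(spam | other attrs) = 0.6, P(spam) = 0.3
print(combine_posteriors(0.8, 0.6, 0.3))  # ~0.93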
See this paper for a description of several Naive Bayes classifiers.

Use a Naive Bayes classifier. You can code one yourself quite quickly, or use/look at the nltk library.

Related

Multiclass semantic segmentation model evaluation

I am doing a project on multiclass semantic segmentation. I have formulated a model that outputs pretty decent segmented images by decreasing the loss value. However, I cannot evaluate the model performance with metrics such as mean IoU or the Dice coefficient.
In the case of binary semantic segmentation it was easy to just set a threshold of 0.5 to classify the outputs as object or background, but that does not work for multiclass semantic segmentation. Could you please tell me how to obtain model performance on the aforementioned metrics? Any help will be highly appreciated!
By the way, I am using the PyTorch framework and the CamVid dataset.
If anyone is interested in this answer, please also look at this issue. The author of the issue points out that mIoU can be computed in a different way (and that method is more accepted in literature). So, consider that before using the implementation for any formal publication.
Basically, the other method suggested by the issue-poster is to separately accumulate the intersections and unions over the entire dataset and divide them at the final step. The method in the below original answer computes intersection and union for a batch of images, then divides them to get IoU for the current batch, and then takes a mean of the IoUs over the entire dataset.
However, the original method given below is problematic because the final mean IoU varies with the batch size. The mIoU from the method mentioned in the issue does not vary with the batch size, since the separate accumulation makes the batch size irrelevant (though a larger batch size can certainly speed up the evaluation); a rough sketch of that accumulation follows.
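Here is a minimal sketch of the dataset-level accumulation (my own rendering of the issue's suggestion, not the issue's actual code):
import torch

# pred and label are (N, H, W) tensors of predicted / ground-truth class
# indices; inter_sums and union_sums are per-class running totals that
# persist across all batches and are divided only once at the end.
def accumulate_iou(pred, label, inter_sums, union_sums, num_classes):
    pred, label = pred.view(-1), label.view(-1)
    for c in range(num_classes):
        pred_c = (pred == c)
        label_c = (label == c)
        inter_sums[c] += (pred_c & label_c).long().sum().item()
        union_sums[c] += (pred_c | label_c).long().sum().item()

# after looping accumulate_iou over the whole dataset:
# ious = [i / u for i, u in zip(inter_sums, union_sums) if u > 0]
# miou = sum(ious) / len(ious)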
Original answer:
Given below is an implementation of mean IoU (Intersection over Union) in PyTorch.
import numpy as np
import torch
import torch.nn.functional as F

def mIOU(label, pred, num_classes=19):
    pred = F.softmax(pred, dim=1)      # (N, C, H, W) -> per-pixel class probabilities
    pred = torch.argmax(pred, dim=1)   # (N, H, W) predicted class indices
    iou_list = list()
    present_iou_list = list()

    pred = pred.view(-1)
    label = label.view(-1)
    # Note: the following for loop goes from 0 to (num_classes-1),
    # and ignore_index is num_classes, so ignore_index is
    # not considered in the computation of IoU.
    for sem_class in range(num_classes):
        pred_inds = (pred == sem_class)
        target_inds = (label == sem_class)
        if target_inds.long().sum().item() == 0:
            iou_now = float('nan')
        else:
            intersection_now = (pred_inds[target_inds]).long().sum().item()
            union_now = pred_inds.long().sum().item() + target_inds.long().sum().item() - intersection_now
            iou_now = float(intersection_now) / float(union_now)
            present_iou_list.append(iou_now)
        iou_list.append(iou_now)
    return np.mean(present_iou_list)
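A quick hypothetical usage sketch (shapes and values invented for illustration, assuming the imports above):
# logits: raw network output of shape (N, num_classes, H, W)
# labels: ground-truth class indices of shape (N, H, W)
logits = torch.randn(2, 19, 64, 64)
labels = torch.randint(0, 19, (2, 64, 64))
print(mIOU(labels, logits))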
The prediction of your model will be per-class scores, so first take the softmax (if your model doesn't already apply one), followed by argmax to get the index with the highest probability at each pixel. Then, we calculate the IoU for each class (and take the mean over classes at the end).
We can reshape both the prediction and the label as 1-D vectors (I read that this makes the computation faster). For each class, we first identify the pixels of that class using pred_inds = (pred == sem_class) and target_inds = (label == sem_class). The resulting pred_inds and target_inds will be 1 at pixels labelled as that particular class and 0 everywhere else.
Then, there is a possibility that the target does not contain that particular class at all. This makes that class's IoU calculation invalid, as the class is not present in the target. So, you assign such classes a NaN IoU (so you can identify them later) and do not involve them in the calculation of the mean.
If the particular class is present in the target, then pred_inds[target_inds] will give a vector of 1s and 0s where indices with 1 are those where prediction and target are equal and zero otherwise. Taking the sum of all elements of this will give us the intersection.
If we add all the elements of pred_inds and target_inds, we'll get the union + intersection of pixels of that particular class. So, we subtract the already calculated intersection to get the union. Then, we can divide the intersection and union to get the IoU of that particular class and add it to a list of valid IoUs.
At the end, you take the mean of the entire list to get the mIoU. If you want the Dice Coefficient, you can calculate it in a similar fashion.
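As a rough sketch (my own, not part of the original answer), the per-class Dice coefficient can reuse the same quantities inside the else branch of the loop above, since Dice = 2I / (|pred| + |target|) and |pred| + |target| = union + intersection:
# inside the else branch, alongside iou_now:
dice_now = 2.0 * intersection_now / (union_now + intersection_now)
# append dice_now to a separate present_dice_list (hypothetical) and
# average it at the end, exactly as done for IoU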

Word2Vec Subsampling -- Implementation

I am implementing the Skipgram model, both in PyTorch and TensorFlow 2. I am having doubts about the implementation of subsampling of frequent words. Verbatim from the paper, the probability of subsampling word wi is computed as
P(wi) = 1 - sqrt(t / f(wi))
where t is a custom threshold (usually a small value such as 0.0001) and f is the frequency of the word in the document. Although the authors implemented it in a different, but almost equivalent way, let's stick with this definition.
When computing P(wi), we can end up with negative values. For example, assume we have 100 words, and one of them appears extremely more often than the others (as is the case in my dataset).
import numpy as np
import seaborn as sns
np.random.seed(12345)
# generate counts in [1, 20]
counts = np.random.randint(low=1, high=20, size=99)
# add an extremely bigger count
counts = np.insert(counts, 0, 100000)
# compute frequencies
f = counts/counts.sum()
# define threshold as in paper
t = 0.0001
# compute probabilities as in paper
probs = 1 - np.sqrt(t/f)
sns.distplot(probs);
Q: What is the correct way to implement subsampling using this "probability"?
As additional info, I have seen that in keras the function keras.preprocessing.sequence.make_sampling_table takes a different approach:
def make_sampling_table(size, sampling_factor=1e-5):
    """Generates a word rank-based probabilistic sampling table.

    Used for generating the `sampling_table` argument for `skipgrams`.
    `sampling_table[i]` is the probability of sampling
    the i-th most common word in a dataset
    (more common words should be sampled less frequently, for balance).

    The sampling probabilities are generated according
    to the sampling distribution used in word2vec:

    ```
    p(word) = (min(1, sqrt(word_frequency / sampling_factor) /
        (word_frequency / sampling_factor)))
    ```

    We assume that the word frequencies follow Zipf's law (s=1) to derive
    a numerical approximation of frequency(rank):

    `frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`
    where `gamma` is the Euler-Mascheroni constant.

    # Arguments
        size: Int, number of possible words to sample.
        sampling_factor: The sampling factor in the word2vec formula.

    # Returns
        A 1D Numpy array of length `size` where the ith entry
        is the probability that a word of rank i should be sampled.
    """
    gamma = 0.577
    rank = np.arange(size)
    rank[0] = 1
    inv_fq = rank * (np.log(rank) + gamma) + 0.5 - 1. / (12. * rank)
    f = sampling_factor * inv_fq
    return np.minimum(1., f / np.sqrt(f))
I tend to trust deployed code more than paper write-ups, especially in a case like word2vec, where the word2vec.c code released by the paper's authors has been widely used and has served as the template for other implementations. If we look at its subsampling mechanism...
if (sample > 0) {
  real ran = (sqrt(vocab[word].cn / (sample * train_words)) + 1) * (sample * train_words) / vocab[word].cn;
  next_random = next_random * (unsigned long long)25214903917 + 11;
  if (ran < (next_random & 0xFFFF) / (real)65536) continue;
}
...we see that words with tiny counts (.cn), which could give negative values in the original formula, here instead give values greater than 1.0. Since the masked-and-scaled random value ((next_random & 0xFFFF) / (real)65536) is always less than 1.0, such words can never satisfy the discard condition. So, it seems the authors' intent was for all negative values of the original formula to mean "never discard".
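Translated into Python (my own rough rendering, not code from the word2vec.c distribution), that keep-probability looks like:
import numpy as np

def keep_probability(count, total_words, sample=1e-4):
    # f is the word's corpus frequency; `sample` plays the role of t in the paper
    f = count / total_words
    ran = (np.sqrt(f / sample) + 1) * (sample / f)
    # ran exceeds 1.0 for rare words; the uniform draw below is in [0, 1),
    # so such words are never discarded
    return min(ran, 1.0)

# a word occurrence is kept when a uniform random draw falls below ran:
# keep = np.random.random() < keep_probability(count, total_words)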
As per the keras make_sampling_table() comment & implementation, they're not consulting the actual word-frequencies at all. Instead, they're assuming a Zipf-like distribution based on word-rank order to synthesize a simulated word-frequency.
If their assumptions were to hold – the related words are from a natural-language corpus with a Zipf-like frequency-distribution – then I'd expect their sampling probabilities to be close to down-sampling probabilities that would have been calculated from true frequency information. And that's probably "close enough" for most purposes.
I'm not sure why they chose this approximation. Perhaps other aspects of their usual processes have not maintained true frequencies through to this step, and they're expecting to always be working with natural-language texts, where the assumed frequencies will be generally true.
(As luck would have it, and because people often want to impute frequencies to public sets of word-vectors which have dropped the true counts but are still sorted from most- to least-frequent, just a few days ago I wrote an answer about simulating a fake-but-plausible distribution using Zipf's law – similar to what this keras code is doing.)
But, if you're working with data that doesn't match their assumptions (as with your synthetic or described datasets), their sampling-probabilities will be quite different than what you would calculate yourself, with any form of the original formula that uses true word frequencies.
In particular, imagine a distribution with one token appearing a million times, then a hundred tokens all appearing just 10 times each. Those hundred tokens' order in the "rank" list is arbitrary – truly, they're all tied in frequency. But the simulation-based approach, by fitting a Zipfian distribution to that ordering, will in fact sample each of them very differently. The one 10-occurrence word lucky enough to be in the 2nd rank position will be far more downsampled, as if it were far more frequent. And the 1st-rank "tall head" value, by having its true frequency under-approximated, will be less down-sampled than otherwise. Neither of those effects seems beneficial, or in the spirit of the frequent-word-downsampling option, which should only "thin out" very frequent words and in all cases leave words of the same frequency as each other in the original corpus roughly equivalently present to each other in the down-sampled corpus.
So for your case, I would go with the original formula (probability-of-discarding-that-requires-special-handling-of-negative-values), or the word2vec.c practical/inverted implementation (probability-of-keeping-that-saturates-at-1.0), rather than the keras-style approximation.
(As a totally-separate note that nonetheless may be relevant for your dataset/purposes, if you're using negative-sampling: there's another parameter controlling the relative sampling of negative examples, often fixed at 0.75 in early implementations, that one paper has suggested can usefully vary for non-natural-language token distributions & recommendation-related end-uses. This parameter is named ns_exponent in the Python gensim implementation, but simply a fixed power value internal to a sampling-table pre-calculation in the original word2vec.c code.)
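If you're using gensim, both knobs are exposed as constructor parameters; a minimal usage sketch (assuming gensim 3.5+, where ns_exponent exists; the sentences corpus is a placeholder):
from gensim.models import Word2Vec

# sample is the frequent-word downsampling threshold (t in the paper);
# ns_exponent=0.75 reproduces the classic word2vec.c behaviour, and other
# values may suit non-natural-language token distributions.
model = Word2Vec(sentences=sentences, sample=1e-4, ns_exponent=0.75)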

How do I classify using SVM Classifier?

I'm doing a project on liver tumor classification. I initially used the region growing method for liver segmentation, and from that I segmented the tumor using FCM.
I then obtained the texture features using the Gray Level Co-occurrence Matrix. My output for that was
stats =
autoc: [1.857855266614132e+000 1.857955341199538e+000]
contr: [5.103143332457753e-002 5.030548650257343e-002]
corrm: [9.512661919561399e-001 9.519459060378332e-001]
corrp: [9.512661919561385e-001 9.519459060378338e-001]
cprom: [7.885631654779597e+001 7.905268525471267e+001]
Now how should I give this as input to the SVM program?
function [itr] = multisvm( T,C,tst )
%MULTISVM(2.0) classifies a given test vector according to the
% given group and tells us which class it belongs to.
% We also have to input the testing matrix.
%Inputs: T=Training Matrix, C=Group, tst=Testing matrix
%Outputs: itr=Resultant class (Group, USE ROW VECTOR MATRIX) to which tst belongs
%----------------------------------------------------------------------%
% IMPORTANT: DON'T USE THIS PROGRAM FOR FEWER THAN 3 CLASSES;          %
%            OTHERWISE USE svmtrain,svmclassify DIRECTLY, or           %
%            add an else condition for that case in this program.      %
%            Modify required data to use Kernel Functions and Plot also%
%----------------------------------------------------------------------%
% Date:11-08-2011(DD-MM-YYYY)
% This function for multiclass Support Vector Machine is written by
% ANAND MISHRA (Machine Vision Lab. CEERI, Pilani, India)
% and is free to use. email: anand.mishra2k88#gmail.com
% Updated version 2.0 Date:14-10-2011(DD-MM-YYYY)

u=unique(C);
N=length(u);
c4=[];
c3=[];
j=1;
k=1;
if(N>2)
    itr=1;
    classes=0;
    cond=max(C)-min(C);
    while((classes~=1)&&(itr<=length(u))&& size(C,2)>1 && cond>0)
        % This while loop is the multiclass SVM trick
        c1=(C==u(itr));
        newClass=c1;
        svmStruct = svmtrain(T,newClass);
        classes = svmclassify(svmStruct,tst);

        % This is the loop for reduction of the training set
        for i=1:size(newClass,2)
            if newClass(1,i)==0
                c3(k,:)=T(i,:);
                k=k+1;
            end
        end
        T=c3;
        c3=[];
        k=1;

        % This is the loop for reduction of the group
        for i=1:size(newClass,2)
            if newClass(1,i)==0
                c4(1,j)=C(1,i);
                j=j+1;
            end
        end
        C=c4;
        c4=[];
        j=1;
        cond=max(C)-min(C); % Condition to avoid a group containing only
                            % similar values, which ends the reduction
        % This condition selects the particular iteration value
        % based on classes
        if classes~=1
            itr=itr+1;
        end
    end
end
end
Kindly guide me.
You have to take all the feature values you get and concatenate them into a feature vector. Then, for the SVM, the features should be normalized so that the values in each dimension vary between -1 and 1, if I remember correctly. I think libsvm has a function for doing the normalization.
So assuming your feature vector ends up having N dimensions, and you have M training instances, your training set should be an M x N matrix. Then if you have P test instances, your test set should be a P x N matrix.
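As an illustration only (numpy code with invented names; you'd do the equivalent in MATLAB), building the M x N training matrix and scaling each dimension to [-1, 1] might look like:
import numpy as np

def make_feature_matrix(stats_list):
    # stats_list: one dict per image, e.g. {'autoc': [...], 'contr': [...], ...};
    # each row of the result is one concatenated feature vector
    return np.array([np.concatenate([np.ravel(v) for v in s.values()])
                     for s in stats_list])

def scale_features(X):
    # scale each column (feature) to [-1, 1] over the training set
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - lo) / np.maximum(hi - lo, 1e-12) - 1.0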
May I also suggest a very popular implementation of SVM called SVMlight: http://svmlight.joachims.org/.
You can find examples of how to use it on the website. A MEX MATLAB wrapper for it is also available.
As pointed out by Dima, you need to concatenate the features.
By the way, can you tell me which dataset you are using for liver tumor classification?
Is it publicly available for download?

Ways to calculate similarity

I am building a community website that requires me to calculate the similarity between any two users. Each user is described with the following attributes:
age, skin type (oily, dry), hair type (long, short, medium), lifestyle (active outdoor lover, TV junky) and others.
Can anyone tell me how to go about this problem or point me to some resources?
Another way is to compute (in R) all the pairwise dissimilarities (distances) between observations in the data set. The original variables may be of mixed types. Nominal, ordinal, and (a)symmetric binary data are handled using the general dissimilarity coefficient of Gower (Gower, J. C. (1971) A general coefficient of similarity and some of its properties, Biometrics 27, 857–874). For more, check out this on page 47. If x contains any columns of these data types, Gower's coefficient will be used as the metric.
For example
x1 <- factor(c(10, 12, 25, 14, 29))
x2 <- factor(c("oily", "dry", "dry", "dry", "oily"))
x3 <- factor(c("medium", "short", "medium", "medium", "long"))
x4 <- factor(c("active outdoor lover", "TV junky", "TV junky", "active outdoor lover", "TV junky"))
x <- cbind(x1,x2,x3,x4)
library(cluster)
daisy(x, metric = "euclidean")
you'll get:
Dissimilarities :
         1        2        3        4
2 2.000000
3 3.316625 2.236068
4 2.236068 1.732051 1.414214
5 4.242641 3.741657 1.732051 2.645751
If you are interested in a method for dimensionality reduction for categorical data (also a way to arrange variables into homogeneous clusters), check this.
Give each attribute an appropriate weight, and add the differences between values.
from enum import IntEnum

class SkinType(IntEnum):
    Dry = 0
    Medium = 1
    Oily = 2

class HairLength(IntEnum):
    Bald = 0
    Short = 1
    Medium = 2
    Long = 3

def user_difference(user1, user2):
    # weighted sum of per-attribute differences
    total = 0.0
    total += abs(user1.age - user2.age) * 0.1
    total += abs(int(user1.skin) - int(user2.skin)) * 0.5
    total += abs(int(user1.hair) - int(user2.hair)) * 0.8
    # etc...
    return total

If you really need similarity instead of difference, use 1 / user_difference(a, b) (guarding against division by zero when the users are identical).
You probably should take a look for
Data Mining and Data Warehousing (Essential)
Machine Learning (Extra)
Artificial Neural Networks (Especially SOM)
Pattern Recognition (Related)
These topics will help your program recognize similarities and clusters in your user collection and adapt to them...
You can then discover different hidden groups of related users (e.g., users with green hair usually do not like watching TV).
As an advice, try to use ready implemented tools for this feature instead of implementing it yourself...
Take a look at Open Directory Data Mining Projects
Three steps to achieve a simple subjective metric for difference between two datapoints that might work fine in your case:
Capture all your variables in a representative numeric variable, for example: skin type (oily=-1, dry=1), hair type (long=2, short=0, medium=1),lifestyle (active outdoor lover=1, TV junky=-1), age is a number.
Scale all numeric ranges so that they fit the relative importance you give them for indicating difference. For example: an age difference of 10 years is about as different as the difference between long and medium hair, and as the difference between oily and dry skin. So 10 on the age scale is as different as 1 on the hair scale and as 2 on the skin scale; scale the difference in age by 0.1, that in hair by 1, and that in skin by 0.5.
Use an appropriate distance metric to combine the differences between two people on the various scales in one overal difference. The smaller this number, the more similar they are. I'd suggest simple quadratic difference as a first attempt at your distance function.
Then the difference between two people could be calculated with (I assume Person.age, .skin, .hair, etc. have already gone through step 1 and are numeric):
double Difference(Person p1, Person p2) {
    double agescale = 0.1;
    double skinscale = 0.5;
    double hairscale = 1;
    double lifestylescale = 1;
    double agediff = (p1.age - p2.age) * agescale;
    double skindiff = (p1.skin - p2.skin) * skinscale;
    double hairdiff = (p1.hair - p2.hair) * hairscale;
    double lifestylediff = (p1.lifestyle - p2.lifestyle) * lifestylescale;
    // note: ^ is XOR in C-family languages, so square by multiplying
    double diff = sqrt(agediff*agediff + skindiff*skindiff + hairdiff*hairdiff + lifestylediff*lifestylediff);
    return diff;
}
Note that diff in this example is not on a nice scale like (0..1). Its value can range from 0 (no difference) to something large (high difference). Also, this method is almost completely unscientific; it is just designed to quickly give you a working difference metric.
Look at algorithms for computing string difference. It's very similar to what you need. Store your attributes as a bit string and compute the distance between the strings.
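For example, a minimal sketch in Python (the attribute encoding here is invented for illustration):
def hamming(a: int, b: int) -> int:
    # number of attribute positions at which the two bit strings differ
    return bin(a ^ b).count('1')

user1 = 0b10110  # each bit encodes one boolean-ish attribute
user2 = 0b10011
print(hamming(user1, user2))  # -> 2 differing attributes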
You should read up on these two topics:
the most popular clustering algorithm, k-means,
and similarity matrices, which are essential in clustering.

Understanding Bayes' Theorem

I'm working on an implementation of a Naive Bayes classifier. Programming Collective Intelligence introduces this subject by describing Bayes' Theorem as:
Pr(A | B) = Pr(B | A) x Pr(A)/Pr(B)
As well as a specific example relevant to document classification:
Pr(Category | Document) = Pr(Document | Category) x Pr(Category) / Pr(Document)
I was hoping someone could explain to me the notation used here, what do Pr(A | B) and Pr(A) mean? It looks like some sort of function but then what does the pipe ("|") mean, etc?
Pr(A | B) = Probability of A happening given that B has already happened
Pr(A) = Probability of A happening
But the above concerns the calculation of conditional probability. What you want is a classifier, which uses this principle to decide whether something belongs to a category based on prior probability.
See http://en.wikipedia.org/wiki/Naive_Bayes_classifier for a complete example
I think they've got you covered on the basics.
Pr(A | B) = Pr(B | A) x Pr(A)/Pr(B)
reads: the probability of A given B is the same as the probability of B given A times the probability of A divided by the probability of B. It's usually used when you can measure the probability of B and you are trying to figure out if B is leading us to believe in A. Or, in other words, we really care about A, but we can measure B more directly, so let's start with what we can measure.
Let me give you one derivation that makes this easier for writing code. It comes from Judea Pearl. I struggled with this a little, but after I realized how Pearl helps us turn theory into code, the light turned on for me.
Prior Odds:
O(H) = P(H) / (1 - P(H))
Likelihood Ratio:
L(e|H) = P(e|H) / P(e|¬H)
Posterior Odds:
O(H|e) = L(e|H)O(H)
In English, we are saying that the odds of something you're interested in (H for hypothesis) are simply the number of times you find it to be true divided by the number of times you find it not to be true. So, say one house out of 10,000 is robbed every day. That means you have a 1/10,000 chance of being robbed, before any other evidence is considered.
The next one measures the evidence you're looking at: the probability of seeing the evidence when your hypothesis is true divided by the probability of seeing that evidence when your hypothesis is not true. Say your burglar alarm goes off. How often does the alarm fire when it's supposed to (someone opens a window while the alarm is on) versus when it's not supposed to (the wind sets it off)? If there is a 95% chance of a burglar setting off the alarm and a 1% chance of something else setting it off, then you have a likelihood ratio of 95.0.
Your posterior odds are just the likelihood ratio times the prior odds. In this case:
((0.95/0.01) * ((10**-4)/(1 - (10**-4))))
# => 0.0095009500950095
I don't know if this makes it any more clear, but it tends to be easier to have some code that keeps track of prior odds, other code to look at likelihoods, and one more piece of code to combine this information.
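For instance, a minimal sketch of that three-part split (my own code, following Pearl's decomposition above, with the burglar-alarm numbers):
def prior_odds(p_h):
    return p_h / (1.0 - p_h)

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    return p_e_given_h / p_e_given_not_h

def posterior_odds(lr, odds):
    return lr * odds

odds = posterior_odds(likelihood_ratio(0.95, 0.01), prior_odds(1e-4))
print(odds)               # ~0.0095, matching the number above
print(odds / (1 + odds))  # converted back to a probability, if needed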
I have implemented it in Python. It's very easy to understand because all formulas for Bayes theorem are in separate functions:
# Bayes' Theorem
def get_outcomes(sample_space, f_name='', e_name=''):
    outcomes = 0
    for e_k, e_v in sample_space.items():
        if f_name == '' or f_name == e_k:
            for se_k, se_v in e_v.items():
                if e_name != '' and se_k == e_name:
                    outcomes += se_v
                elif e_name == '':
                    outcomes += se_v
    return outcomes

def p(sample_space, f_name):
    return get_outcomes(sample_space, f_name) / get_outcomes(sample_space, '', '')

def p_inters(sample_space, f_name, e_name):
    return get_outcomes(sample_space, f_name, e_name) / get_outcomes(sample_space, '', '')

def p_conditional(sample_space, f_name, e_name):
    return p_inters(sample_space, f_name, e_name) / p(sample_space, f_name)

def bayes(sample_space, f, given_e):
    # denominator: total probability of the evidence, summed over all classes
    total = 0
    for e_k, e_v in sample_space.items():
        total += p(sample_space, e_k) * p_conditional(sample_space, e_k, given_e)
    return p(sample_space, f) * p_conditional(sample_space, f, given_e) / total

sample_space = {'UK': {'Boy': 10, 'Girl': 20},
                'FR': {'Boy': 10, 'Girl': 10},
                'CA': {'Boy': 10, 'Girl': 30}}

print('Probability of being from FR:', p(sample_space, 'FR'))
print('Probability to be French Boy:', p_inters(sample_space, 'FR', 'Boy'))
print('Probability of being a Boy given a person is from FR:', p_conditional(sample_space, 'FR', 'Boy'))
print('Probability to be from France given person is Boy:', bayes(sample_space, 'FR', 'Boy'))

sample_space = {'Grow': {'Up': 160, 'Down': 40},
                'Slows': {'Up': 30, 'Down': 70}}
print('Probability economy is growing when stock is Up:', bayes(sample_space, 'Grow', 'Up'))
Pr(A | B): conditional probability of A, i.e. the probability of A given that all we know is B
Pr(A): prior probability of A
Pr is the probability, Pr(A|B) is the conditional probability.
Check Wikipedia for details.
The pipe (|) means "given".
The probability of A given B is equal to the probability of B given A times Pr(A)/Pr(B).
Based on your question, I strongly advise that you first read an undergraduate book on probability theory. Without it you will not make proper progress on your Naive Bayes classifier.
I would recommend this book, http://www.athenasc.com/probbook.html, or the MIT OpenCourseWare materials.
The pipe is used to represent conditional probability.
Pr(A | B) = Probability of A given B
Example:
Let's say you are not feeling well and you search the web for your symptoms. The internet tells you that if you have these symptoms then you have the XYZ disease.
In this case:
Pr(A | B) is what you are trying to find out, which is:
The probability of you having XYZ GIVEN THAT you have certain symptoms.
Pr(A) is the probability of having the disease XYZ
Pr(B) is the probability of having those symptoms
Pr(B | A) is what you find out from the internet, which is:
The probability of having the symptoms GIVEN THAT you have the disease.
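To make the example concrete, here is a small numeric sketch (the probabilities are invented for illustration):
p_b_given_a = 0.9    # Pr(B | A): symptoms given the disease
p_a = 0.001          # Pr(A): prior probability of the disease
p_b = 0.05           # Pr(B): overall probability of the symptoms
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)   # -> 0.018, still small despite the scary symptoms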
