I am trying to define Multivariate Gaussian distribution with randomly generated covariance matrix:
psi = torch.zeros(512).normal_(0., 1.).requires_grad_(True)
# Generate random matrix
Sigma_k = torch.rand(512, 512)
# Make it symmetric positive
Sigma_k = Sigma_k * Sigma_k.t()
# Make it definite
Sigma_k.add_(0.001, torch.eye(512)).requires_grad_(True)
multivariate_normal.MultivariateNormal(psi, Sigma_k)
But I end up with getting an exception:
RuntimeError: Lapack Error in potrf : the leading minor of order 2 is not positive definite at /Users/soumith/mc3build/conda-bld/pytorch_1549597882250/work/aten/src/TH/generic/THTensorLapack.cpp:658
What is the proper way to generate positive definite square matrix?
The answer is that one should take the matrix product of a matrix A and its transpose A.t() (not the element-wise product) in order to obtain a positive semi-definite matrix. The last step is to make it strictly positive definite, for example by adding a small multiple of the identity so every eigenvalue is strictly greater than zero.
With PyTorch:
Sigma_k = torch.rand(512, 512)
Sigma_k = torch.mm(Sigma_k, Sigma_k.t())  # A @ A.T is symmetric and positive semi-definite
Sigma_k.add_(torch.eye(512))              # adding I makes it strictly positive definite
Formal algorithm is described here.
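For reference, a small self-contained sketch (mine, not part of the original answer) showing the construction end to end with the distribution from the question:

import torch
from torch.distributions import multivariate_normal

psi = torch.zeros(512).normal_(0., 1.).requires_grad_(True)
Sigma_k = torch.rand(512, 512)
Sigma_k = torch.mm(Sigma_k, Sigma_k.t())  # symmetric, positive semi-definite
Sigma_k.add_(torch.eye(512))              # strictly positive definite
# Cholesky (potrf) now succeeds, so the distribution can be constructed
dist = multivariate_normal.MultivariateNormal(psi, covariance_matrix=Sigma_k)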
In "make it definite"
tensor.add() does not change tensor, but only returns a changed version.
You want to use tensor.add_()
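A tiny illustration of the difference (my own snippet):

import torch

t = torch.zeros(2)
t.add(torch.ones(2))   # out-of-place: returns a new tensor, t is unchanged
t.add_(torch.ones(2))  # in-place: t itself is modified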
I have a binary classification problem for detecting AO/Non-AO images, using PyTorch for this purpose.
First, I load the data using the ImageFolder utility.
The class-to-label mapping in dataset.class_to_idx is {'AO': 0, 'Non-AO': 1}.
So, my 'positive class' AO, is assigned a label 0, and my 'negative class' Non-AO is assigned a label 1.
Then I train and validate the model without any issues.
When I come to testing, I need to calculate some metrics on the test data.
Here is where I am confused.
[Method A]
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)
[Method B]
# because 0 is my actual 'positive' class for this problem
fpr, tpr, thresholds = roc_curve(y_true, y_score, pos_label=0)
roc_auc = auc(fpr, tpr)
Now, this second curve is basically the mirror of the first one along the diagonal, right?
And I think that it can't be the correct curve, because I checked the accuracy of the model by directly comparing y_true and y_pred, and got the following accuracies.
Apart from this, here is what my confusion matrix looks like.
So, my first question is, am I doing something wrong? What is the significance of the curve from Method B? Can I say that Method A gives me the correct ROC curve for my classification task? If not, then how do I proceed for getting the correct curve?
What do true positive, true negative, or any of the other terms signify for my confusion matrix? Does the matrix consider 0 : AO as negative and 1 : Non-AO as positive (I think so, yes), or vice versa?
If 0 is indeed being considered as negative when I actually want 0 to be considered as positive, how can I make changes to reflect this in the matrix (because I am using the matrix later to calculate other metrics like specificity, sensitivity, etc.)?
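A hedged sketch of one way to handle the confusion-matrix part (assuming y_true and y_pred hold the test labels and hard predictions): scikit-learn's confusion_matrix orders rows and columns by the sorted label values by default, so label 0 (AO) comes first; passing labels=[1, 0] puts the class you treat as positive (AO, label 0) last, which matches the usual [[TN, FP], [FN, TP]] layout.

from sklearn.metrics import confusion_matrix

# default ordering: rows/columns follow sorted labels [0, 1], i.e. [AO, Non-AO]
cm_default = confusion_matrix(y_true, y_pred)

# put the "negative" class (1: Non-AO) first and the "positive" class (0: AO) last
cm_ao_positive = confusion_matrix(y_true, y_pred, labels=[1, 0])
tn, fp, fn, tp = cm_ao_positive.ravel()  # tp now counts correctly predicted AO images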
I am doing a project on multiclass semantic segmentation. I have built a model that outputs pretty decent segmented images as the loss decreases. However, I cannot evaluate the model's performance with metrics such as mean IoU or the Dice coefficient.
In the case of binary semantic segmentation it was easy to just set a threshold of 0.5 to classify the outputs as object or background, but that does not work for multiclass semantic segmentation. Could you please tell me how to obtain model performance on the aforementioned metrics? Any help will be highly appreciated!
By the way, I am using the PyTorch framework and the CamVid dataset.
If anyone is interested in this answer, please also look at this issue. The author of the issue points out that mIoU can be computed in a different way (and that method is more accepted in the literature). So consider that before using this implementation for any formal publication.
Basically, the other method suggested by the issue poster is to separately accumulate the intersections and unions over the entire dataset and divide them only at the final step. The original method below computes the intersection and union for a batch of images, divides them to get the IoU for the current batch, and then takes the mean of the per-batch IoUs over the entire dataset.
However, the original method is problematic because the final mean IoU varies with the batch size. The mIoU from the method mentioned in the issue, on the other hand, does not vary with the batch size, since the separate accumulation ensures that the batch size is irrelevant (though a larger batch size can definitely help speed up the evaluation).
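A rough sketch of that accumulation variant (my own code, not the issue author's; it assumes a hypothetical loader yielding pairs of predicted and ground-truth index maps):

import torch

num_classes = 19
intersections = torch.zeros(num_classes)
unions = torch.zeros(num_classes)

for pred, label in loader:  # hypothetical loader of (pred, label) index maps
    pred = pred.view(-1)
    label = label.view(-1)
    for c in range(num_classes):
        pred_c = (pred == c)
        label_c = (label == c)
        inter = (pred_c & label_c).sum().item()
        intersections[c] += inter
        unions[c] += pred_c.sum().item() + label_c.sum().item() - inter

# classes that never appear in the dataset have union == 0 and are skipped
valid = unions > 0
miou = (intersections[valid] / unions[valid]).mean().item()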
Original answer:
Given below is an implementation of mean IoU (Intersection over Union) in PyTorch.
import numpy as np
import torch
import torch.nn.functional as F

def mIOU(label, pred, num_classes=19):
    pred = F.softmax(pred, dim=1)
    pred = torch.argmax(pred, dim=1).squeeze(1)
    iou_list = list()
    present_iou_list = list()

    pred = pred.view(-1)
    label = label.view(-1)
    # Note: the following for loop goes from 0 to (num_classes-1),
    # and ignore_index is num_classes, thus ignore_index is
    # not considered in the computation of IoU.
    for sem_class in range(num_classes):
        pred_inds = (pred == sem_class)
        target_inds = (label == sem_class)
        if target_inds.long().sum().item() == 0:
            iou_now = float('nan')
        else:
            intersection_now = (pred_inds[target_inds]).long().sum().item()
            union_now = pred_inds.long().sum().item() + target_inds.long().sum().item() - intersection_now
            iou_now = float(intersection_now) / float(union_now)
            present_iou_list.append(iou_now)
        iou_list.append(iou_now)
    return np.mean(present_iou_list)
The prediction of your model will be in one-hot (per-class channel) form, so first take a softmax (if your model doesn't apply one already), followed by an argmax to get the index with the highest probability at each pixel. Then we calculate the IoU for each class (and take the mean over them at the end).
We reshape both the prediction and the label as 1-D vectors (I read that it makes the computation faster). For each class, we first identify the pixels of that class using pred_inds = (pred == sem_class) and target_inds = (label == sem_class). The resulting pred_inds and target_inds will have 1 at pixels labelled as that particular class and 0 for any other class.
Then, there is a possibility that the target does not contain that particular class at all. This makes that class's IoU calculation invalid, as the class is not present in the target. So you assign such classes a NaN IoU (so you can identify them later) and do not involve them in the calculation of the mean.
If the particular class is present in the target, then pred_inds[target_inds] will give a vector of 1s and 0s, where an index is 1 when the prediction and the target agree on that class and 0 otherwise. Taking the sum of all its elements gives us the intersection.
If we add all the elements of pred_inds and target_inds, we get the union + intersection of pixels of that particular class. So we subtract the already-calculated intersection to get the union. Then we divide the intersection by the union to get the IoU of that particular class and add it to the list of valid IoUs.
At the end, you take the mean of the entire list to get the mIoU. If you want the Dice coefficient, you can calculate it in a similar fashion; a sketch is given below.
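For completeness, a minimal sketch of a Dice coefficient computed the same way (my own code, same assumptions as mIOU above: per-class Dice averaged over the classes present in the label):

import torch
import torch.nn.functional as F

def mean_dice(label, pred, num_classes=19):
    pred = torch.argmax(F.softmax(pred, dim=1), dim=1).view(-1)
    label = label.view(-1)
    dice_list = []
    for sem_class in range(num_classes):
        pred_inds = (pred == sem_class)
        target_inds = (label == sem_class)
        if target_inds.long().sum().item() == 0:
            continue  # class absent from the target, skip (like the NaN case above)
        intersection = (pred_inds & target_inds).long().sum().item()
        denom = pred_inds.long().sum().item() + target_inds.long().sum().item()
        dice_list.append(2.0 * intersection / denom)
    return sum(dice_list) / len(dice_list)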
Suppose I need to build a network that takes two inputs:
A patient's information, represented as an array of features
Selected treatment, represented as a one-hot encoded array
Now, how do I build a network that outputs a 2D probability matrix A where A[i,j] represents the probability that the patient will end up in state j under treatment i? Let's say there are n possible states, and under any treatment, the total probability over all n states sums to 1.
I wanted to do this because I was motivated by a similar network, where the inputs are the same as above but the output is a 1-D array representing the expected lifetime after treatment i is delivered. Such a network is built as follows:
import tensorflow as tf
from tensorflow import keras

def default_dense(feature_shape, n_treatments):
    feature_input = keras.layers.Input(feature_shape)
    treatment_input = keras.layers.Input((n_treatments,))
    hidden_1 = keras.layers.Dense(16, activation='relu')(feature_input)
    hidden_2 = keras.layers.Dense(16, activation='relu')(hidden_1)
    output = keras.layers.Dense(n_treatments)(hidden_2)
    output_on_action = keras.layers.multiply([output, treatment_input])
    model = keras.models.Model([feature_input, treatment_input], output_on_action)
    model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse')
    return model
And the training is simply
model.fit(x = [features, encoded_treatments], y = encoded_treatments * lifetime[:, np.newaxis], verbose = 0)
This is super handy because when predicting, I can use np.ones() as the encoded_treatments, and the network gives the expected lifetimes under all treatments, so choosing the best one takes a single step. Certainly I could create multiple networks, one per treatment, but that would be much less efficient.
Now the question is: can I do the same for a probability output?
I have figured it out myself. The trick is to use RepeatVector() and Permute() layers to generate a matrix mask for the treatments.
The output is an element-wise Multiply() of the mask and a Softmax() output of the same size; a sketch is given below.
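A minimal sketch of that idea (my own reconstruction, not the original code; n_states is an assumed parameter for the number of outcome states):

import tensorflow as tf
from tensorflow import keras

def probability_model(feature_shape, n_treatments, n_states):
    feature_input = keras.layers.Input(feature_shape)
    treatment_input = keras.layers.Input((n_treatments,))

    hidden_1 = keras.layers.Dense(16, activation='relu')(feature_input)
    hidden_2 = keras.layers.Dense(16, activation='relu')(hidden_1)

    # per-treatment state logits, reshaped to (n_treatments, n_states)
    logits = keras.layers.Dense(n_treatments * n_states)(hidden_2)
    logits = keras.layers.Reshape((n_treatments, n_states))(logits)
    probs = keras.layers.Softmax(axis=-1)(logits)  # each row sums to 1

    # treatment mask: (n_treatments,) -> (n_states, n_treatments) -> (n_treatments, n_states)
    mask = keras.layers.RepeatVector(n_states)(treatment_input)
    mask = keras.layers.Permute((2, 1))(mask)

    # zero out every row except the selected treatment's
    output = keras.layers.Multiply()([probs, mask])

    model = keras.models.Model([feature_input, treatment_input], output)
    model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse')
    return model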
In WinBUGS, I am specifying a model with a multinomial likelihood function, and I need to make sure that the multinomial probabilities are all between 0 and 1 and sum to 1.
Here is the part of the code specifying the likelihood:
e[k,i,1:9] ~ dmulti(P[k,i,1:9],n[i,k])
Here, the array P[] specifies the probabilities for the multinomial distribution.
These probabilities are to be estimated from my data (the matrix e[]) using multiple linear regressions on a series of fixed and random effects. For instance, here is the multiple linear regression used to predict one of the elements of P[]:
P[k,1,2] <- intercept[1,2] + Slope1[1,2]*Covariate1[k] + Slope2[1,2]*Covariate2[k] +
            Slope3[1,2]*Covariate3[k] + Slope4[1,2]*Covariate4[k] +
            RandomEffect1[group[k]] + RandomEffect2[k]
When compiling, the model produces an error:
elements of proportion vector of multinomial e[1,1,1] must be between zero and one
If I understand this correctly, this means that the elements of the vector P[k,i,1:9] (the probability vector in the multinomial likelihood function above) may be very large (or small) numbers. In reality, they all need to be between 0 and 1, and sum to 1.
I am new to WinBUGS, but from reading around it seems that somehow using a beta regression rather than multiple linear regressions might be the way forward. However, although this would allow each element to be between 0 and 1, it doesn't seem to get to the heart of the problem, which is that all the elements of P[k,i,1:9] must be positive and sum to 1.
It may be that the response variable can very simply be transformed to be a proportion. I have tried this by dividing each element by the sum of P[k,i,1:9], but so far without success.
Any tips would be very gratefully appreciated!
(I have supplied the problematic sections of the model; the whole thing is fairly long.)
The usual way to do this would be to use the multinomial equivalent of a logit link to constrain the transformed probabilities to the interval (0,1). For example (for a single predictor but it is the same principle for as many predictors as you need):
Response[i, 1:Categories] ~ dmulti(prob[i, 1:Categories], Trials[i])
phi[i,1] <- 1
prob[i,1] <- 1 / sum(phi[i, 1:Categories])
for(c in 2:Categories){
    log(phi[i,c]) <- intercept[c] + slope1[c] * Covariate1[i]
    prob[i,c] <- phi[i,c] / sum(phi[i, 1:Categories])
}
For identifiability the value of phi[i,1] is set to 1, but the other values of intercept and slope1 are estimated independently. When the number of Categories is equal to 2, this collapses to the usual logistic regression but coded for a multinomial response:
log(phi[i,2]) <- intercept[2] + slope1[2] * Covariate1[i]
prob[i,2] <- phi[i, 2] / (1 + phi[i, 2])
prob[i,1] <- 1 / (1 + phi[i, 2])
i.e.:
logit(prob[i,2]) <- intercept[2] + slope1[2] * Covariate1[i]
prob[i,1] <- 1 - prob[i,2]
In this model I have indexed slope1 by the category, meaning that each level of the outcome has an independent relationship with the predictor. If you have an ordinal response and want to assume that the odds ratio associated with the covariate is consistent between successive levels of the response, then you can drop the index on slope1 (and reformulate the code slightly so that phi is cumulative) to get a proportional odds logistic regression (POLR).
Edit
Here is a link to some example code covering logistic regression, multinomial regression and POLR from a course I teach:
http://runjags.sourceforge.net/examples/squirrels.R
Note that it uses JAGS (rather than WinBUGS) but as far as I know there are no differences in model syntax for these types of models. If you want to quickly get started with runjags & JAGS from a WinBUGS background then you could follow this vignette:
http://runjags.sourceforge.net/quickjags.html
I am trying to implement the Expectation Maximization algorithm (Gaussian Mixture Model) on a data set data=[[x,y],...]. I am using the mv_norm.pdf(data, mean, cov) function to calculate cluster responsibilities. But after calculating new values of the covariance matrix over 6-7 iterations, the cov matrix becomes singular, i.e. the determinant of cov is 0 (a very small value), and hence it gives errors
ValueError: the input matrix must be positive semidefinite
and
raise np.linalg.LinAlgError('singular matrix')
Can someone suggest any solution for this?
import copy
from scipy.stats import multivariate_normal as mv_norm

# E-step: Compute cluster responsibilities, given cluster parameters
def calculate_cluster_responsibility(data, centroids, cov_m):
    pdfmain = [[] for i in range(0, len(data))]
    for i in range(0, len(data)):
        sum1 = 0
        pdfeach = [[] for m in range(0, len(centroids))]
        pdfeach[0] = 1/3. * mv_norm.pdf(data[i], mean=centroids[0], cov=[[cov_m[0][0][0], cov_m[0][0][1]], [cov_m[0][1][0], cov_m[0][1][1]]])
        pdfeach[1] = 1/3. * mv_norm.pdf(data[i], mean=centroids[1], cov=[[cov_m[1][0][0], cov_m[1][0][1]], [cov_m[1][1][0], cov_m[0][1][1]]])
        pdfeach[2] = 1/3. * mv_norm.pdf(data[i], mean=centroids[2], cov=[[cov_m[2][0][0], cov_m[2][0][1]], [cov_m[2][1][0], cov_m[2][1][1]]])
        sum1 += pdfeach[0] + pdfeach[1] + pdfeach[2]
        pdfeach[:] = [x / sum1 for x in pdfeach]
        pdfmain[i] = pdfeach

    global old_pdfmain
    if old_pdfmain == pdfmain:
        return
    old_pdfmain = copy.deepcopy(pdfmain)
    softcounts = [sum(i) for i in zip(*pdfmain)]
    calculate_cluster_weights(data, centroids, pdfmain, softcounts)
Initially, I've passed [[3,0],[0,3]] for each cluster covariance since expected number of clusters is 3.
Can someone suggest any solution for this?
The problem is that your data lies on some manifold of dimension strictly smaller than that of the input data. For example, your data may lie on a circle while you have 3-dimensional data. As a consequence, when your method tries to estimate a 3-dimensional ellipsoid (covariance matrix) that fits your data, it fails, since the optimal one is a 2-dimensional ellipse (the third dimension is 0).
How to fix it? You need some regularization of your covariance estimator. There are many possible solutions, all in the M step, not the E step; the problem is with computing the covariance (both options are sketched below):
Simple solution: instead of doing something like cov = np.cov(X), add a small regularizing term, e.g. cov = np.cov(X) + eps * np.identity(X.shape[1]) with a small eps.
Use a nicer estimator, such as the LedoitWolf estimator from scikit-learn.
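A minimal sketch of both options (assuming X is an (n_samples, n_features) array of the points currently assigned to one cluster):

import numpy as np
from sklearn.covariance import LedoitWolf

X = np.random.randn(100, 2)  # placeholder data for illustration

# Option 1: add a small ridge to the empirical covariance
eps = 1e-6
cov_ridge = np.cov(X, rowvar=False) + eps * np.identity(X.shape[1])

# Option 2: use the Ledoit-Wolf shrinkage estimator
cov_lw = LedoitWolf().fit(X).covariance_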
Initially, I've passed [[3,0],[0,3]] for each cluster covariance since expected number of clusters is 3.
This makes no sense; the covariance matrix values have nothing to do with the number of clusters. You can initialize it with anything more or less reasonable.