Apologies if this is a stupid question, but I have a dataset with two classes that I wish to classify using a U-Net.
When creating the label matrices, do I need to explicitly define the null / base class (everything which isn't a class) or will Keras calculate this automatically?
For example, if I have a set of images where I'd like to segment the regions where there is a dog or where there is a cat, do I need to create a third label matrix which labels everything that is not a dog or a cat (and thus have three classes)?
Furthermore, the null class dominates the images I wish to segment; if I were to use class_weight, it seems to only accept a dictionary as input, whereas I could swear that previously I could specify a list and that would suffice.
If I treat my problem as a two-class problem, I'm assuming I need to specify the weight of the null class too, i.e. class_weight = [nullweight, dogweight, catweight].
Thank you
edit: Attached example
Is the above image a two-class or three-class problem?
You must explicitly specify the background class, since the network needs to differentiate between the dog, the cat, and the background.
As for the class_weight parameter, the discussion is a little more complicated: you cannot assign weights the way you would in a simple classification problem.
Indeed, in many segmentation problems the background constitutes a large part of the image, so you need to be careful when approaching such an imbalanced problem.
You need to look at the sample_weight parameter rather than class_weight; you can have a look at these threads:
https://datascience.stackexchange.com/questions/31129/sample-importance-training-weights-in-keras
https://github.com/keras-team/keras/issues/3653
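To make this concrete, here is a minimal sketch of per-pixel weighting via sample_weight. It assumes a Keras version from the era of the threads above, where compile() accepts sample_weight_mode="temporal", and a network whose output has been reshaped to (batch, H*W, n_classes); the toy model, data, and weight values are illustrative, not from the question.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

H, W, n_classes = 64, 64, 3  # background, dog, cat

# Toy stand-in for a U-Net: what matters is that the output is reshaped to
# (batch, H*W, n_classes), so Keras can apply one weight per pixel.
inputs = keras.Input(shape=(H, W, 1))
x = layers.Conv2D(n_classes, 3, padding="same")(inputs)
x = layers.Reshape((H * W, n_classes))(x)
outputs = layers.Activation("softmax")(x)
model = keras.Model(inputs, outputs)

# "temporal" mode tells (older) Keras to expect one weight per output step
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              sample_weight_mode="temporal")

# Dummy data: one-hot masks flattened to (batch, H*W, n_classes)
x_train = np.random.rand(8, H, W, 1).astype("float32")
y_ids = np.random.randint(0, n_classes, size=(8, H * W))
y_train = keras.utils.to_categorical(y_ids, n_classes)

# Turn per-class weights into per-pixel weights: down-weight the background
class_weights = np.array([0.1, 1.0, 1.0])
pixel_weights = (y_train * class_weights).sum(axis=-1)  # shape (batch, H*W)

model.fit(x_train, y_train, sample_weight=pixel_weights, epochs=1)
```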
I have 5 folders (representing 5 classes, each containing about 200 colour images), and I want to use Principal Component Analysis (PCA) for image classification.
Previously I used a ResNet to predict which class each image belongs to, but now I want to use PCA.
I am trying to implement this in code; any help please?
PCA is not a classification method. It is a dimensionality reduction method that is sometimes used as a preprocessing step.
Take a look at this CrossValidated post for some more explanation. It has an example.
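For instance, here is a minimal scikit-learn sketch (with made-up data in place of your image folders) of PCA used as a preprocessing step in front of an actual classifier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Made-up stand-in for your data: 1000 flattened 32x32 RGB images, 5 classes
rng = np.random.default_rng(0)
X = rng.random((1000, 32 * 32 * 3))
y = rng.integers(0, 5, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA only reduces the dimensionality; the classifier does the predicting
clf = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```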
(FYI, saw this because you pinged me via the MATLAB Answers forum.)
I'm gathering training data for multilabel classification. Some of the data fed into this project will not have enough information to assign it to one of the labels. If I train the model with data that belongs to no label, will it avoid labelling new data that is unclear? Do I need to train it with an "Unclear" label or should I just leave this type of data unlabelled?
I can't seem to find the answer to this question in the spaCy docs.
Assuming you really want multilabel classification, i.e. an instance can have zero or multiple classes, then it's fine to have some data without any label. If the model performs correctly, it should also predict no label for similar instances. Be careful, however: "no label" does not mean "unclear" to the model; it means that none of the possible classes apply (they are considered independently).
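For example, in spaCy's "cats" annotation format, a no-label instance is simply one where every category is 0.0 (the DOG/CAT labels here are made up for illustration):

```python
# Hedged sketch of spaCy-style multilabel annotations ("cats" dicts).
# An instance where every category is 0.0 is a legitimate "no label"
# example, not an "unclear" one.
train_data = [
    ("the dog chased the cat", {"cats": {"DOG": 1.0, "CAT": 1.0}}),
    ("the dog slept all day",  {"cats": {"DOG": 1.0, "CAT": 0.0}}),
    ("nothing relevant here",  {"cats": {"DOG": 0.0, "CAT": 0.0}}),  # no label
]
```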
Note that in the case of multiclass classification, i.e. an instance always has exactly one class, it is impossible to assign no label to an instance. But it would also be suboptimal to create a class 'unclear', because in multiclass classification the model predicts the most likely class, i.e. relative to the others. Semantically, 'no label' is not a regular label comparable to the others.
Technically this is not a programming question (for future reference, better ask such questions on https://datascience.stackexchange.com/ or https://stats.stackexchange.com/).
I have an image dataset with soft labels (i.e. the images don't belong to a single class, but rather I have a probability distribution saying there's a 66% chance this image belongs to one class and a 33% chance it belongs to some other class).
I am struggling to figure out how to setup my PyTorch code to allow this to be represented by the model and outputted correctly. The probabilities are saved in a csv file. I have looked at the PyTorch docs and other resources which mention the cross entropy loss function but I am still unclear how to import the data successfully and make use of soft labels.
What you are trying to solve is a multi-label classification task, i.e. instances can be classified with more than one label at a time. You cannot use torch.nn.CrossEntropyLoss since it only allows for single-label targets. So you have two options:
Either use a soft version of nn.CrossEntropyLoss: this can be done by implementing the loss by hand, allowing for soft targets (a minimal sketch follows this list). You can find such an implementation in the thread Soft Cross Entropy in PyTorch.
Or treat the task as multiple "independent" binary classification tasks; in this case you would use nn.BCEWithLogitsLoss (this loss applies the sigmoid internally).
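Here is the minimal hand-rolled sketch promised above for the first option; the function name and toy values are illustrative:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_probs):
    # -sum_k p_k * log_softmax(logits)_k, averaged over the batch
    log_probs = F.log_softmax(logits, dim=-1)
    return -(target_probs * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 3, requires_grad=True)    # batch of 4, 3 classes
targets = torch.tensor([[0.66, 0.33, 0.01]] * 4)  # soft labels, rows sum to 1
loss = soft_cross_entropy(logits, targets)
loss.backward()
```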
PyTorch CrossEntropyLoss supports soft labels natively now
Thanks to the PyTorch team, I believe this problem has been solved in current versions of torch.nn.CrossEntropyLoss.
You can directly input probabilities for each class as the target (see the docs).
Here is the forum discussion that pushed this enhancement.
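A minimal sketch (assuming PyTorch >= 1.10, where this was added):

```python
import torch
import torch.nn as nn

# Since PyTorch 1.10, CrossEntropyLoss accepts class probabilities as the
# target: a float tensor with the same shape as the logits.
criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3, requires_grad=True)
soft_targets = torch.tensor([[0.66, 0.33, 0.01]] * 4)  # each row sums to 1
loss = criterion(logits, soft_targets)
loss.backward()
```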
I am working on an imbalanced binary classification problem where 97% of the data belongs to one class. I am using a naive Bayes classifier and I am getting a test CV score of 1. average_precision_score() also gives 1. What is the intuition behind this result, and how can I better classify this problem?
General things you need to do:
1. A CV approach that respects the class imbalance (something like StratifiedKFold). This way you can be sure you always have the minority class in your test set (see the sketch after this list)
2. Another metric (possibly even a custom one that uses different weights for different error types). For example, take a look at the focal loss
3. Oversampling/undersampling techniques (imblearn in Python)
Further steps
4. Visualization (t-SNE). Can give you some ideas about the general pattern
5. Feature importance and feature engineering based on important features (can make classification easier)
6. Other models (depending on (4)), e.g. boosting
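A sketch of points 1 and 2 together, using made-up data with the 97/3 split from the question:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Made-up data with the 97/3 imbalance from the question
rng = np.random.default_rng(0)
X = rng.random((1000, 10))
y = np.array([0] * 970 + [1] * 30)

# Stratified folds keep the minority class present in every test split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Score with a metric that is informative under imbalance, not plain accuracy
scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="average_precision")
print(scores.mean())
```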
To classify this problem better you need to deal with the class imbalance issue. Try reading articles on how to handle class imbalance, like this one:
https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/
I am doing transfer-learning/retraining using Tensorflow Inception V3 model. I have 6 labels. A given image can be one single type only, i.e, no multiple class detection is needed. I have three queries:
Which activation function is best for my case? Presently the retrain.py file provided by TensorFlow uses softmax. What other methods are available (like sigmoid, etc.)?
Which optimizer should I use? (GradientDescent, Adam, etc.)
I want to identify out-of-scope images, i.e. if a user inputs a random image, my algorithm should say that it does not belong to any of the described classes. Presently, with 6 classes, it confidently outputs one class, but I do not want that. What are possible solutions for this?
Also, what other parameters may we tweak in TensorFlow? My baseline accuracy is 94% and I am looking for something close to 99%.
Since you're doing single-label classification, softmax is the right output activation, as it maps your final-layer logit values to a probability distribution. Sigmoid is used for multilabel classification.
It's always better to use a momentum-based optimizer than vanilla gradient descent. There are a bunch of such modified optimizers, like Adam or RMSProp. Experiment with them to see what works best; Adam will probably give you the best performance.
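Putting the two suggestions together, a minimal sketch of a 6-class softmax head compiled with Adam (the 2048-dim bottleneck input and the learning rate are illustrative assumptions, not values from the question):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hedged sketch: a softmax head over pre-extracted bottleneck features,
# compiled with the Adam optimizer suggested above.
model = keras.Sequential([
    keras.Input(shape=(2048,)),             # e.g. Inception V3 bottleneck features
    layers.Dense(6, activation="softmax"),  # single-label: softmax, not sigmoid
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```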
You can add an extra label, no_class, so your task becomes a 6+1-label classification. You can feed in some random images with no_class as the label. However, the distribution of your random images must match the test-time image distribution, or it won't generalise.