My goal is to do binary classification using a neural network.
The problem is that the dataset is imbalanced: I have 90% of class 1 and 10% of class 0.
To deal with this I want to use stratified cross-validation.
The problem is that I am working with PyTorch; I can't find any example, the documentation doesn't cover it, and I'm a student, quite new to neural networks.
Can anybody help?
Thank you!
The easiest way I've found is to do your stratified splits before passing your data to a PyTorch Dataset and DataLoader. That lets you avoid porting all your code to skorch, which can break compatibility with some cluster computing frameworks.
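Here is a minimal sketch of that approach (the data shapes and hyperparameters are illustrative, not taken from the question):

import torch
from torch.utils.data import DataLoader, Subset, TensorDataset
from sklearn.model_selection import StratifiedKFold

X = torch.randn(1000, 20)                                   # dummy features
y = torch.cat([torch.ones(900), torch.zeros(100)]).long()   # 90/10 imbalance

dataset = TensorDataset(X, y)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Each fold preserves the 90/10 class ratio; Subset wraps the fold indices.
for fold, (train_idx, val_idx) in enumerate(skf.split(X.numpy(), y.numpy())):
    train_loader = DataLoader(Subset(dataset, list(train_idx)), batch_size=64, shuffle=True)
    val_loader = DataLoader(Subset(dataset, list(val_idx)), batch_size=64)
    # ... train and evaluate one model per fold here ...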
Have a look at skorch. It's a scikit-learn compatible neural network library that wraps PyTorch. It provides a CVSplit class for cross-validation, or you can use sklearn directly.
Adapted from the docs:
from skorch import NeuralNetClassifier
from sklearn.model_selection import cross_val_predict

net = NeuralNetClassifier(
    module=MyModule,      # your torch.nn.Module class
    train_split=None,     # hand the splitting over to sklearn
)

y_pred = cross_val_predict(net, X, y, cv=5)
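Note that train_split=None disables skorch's internal validation split, so cross_val_predict fully controls the folds. For a classifier with an integer cv, scikit-learn defaults to stratified folds, so each fold keeps the 90/10 class ratio from the question.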
Is there a way to pull the weights from the best-performing Keras model that was created using an Optuna study? The model I am working with is a fully connected network with dense layers.
The studies are called using the traditional method:
study = optuna.create_study()
study.optimize(objective, n_trials=100)
I can supply any additional code that might be necessary.
Thanks!
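Optuna does not keep the trained models themselves, so one common pattern is to save each trial's weights keyed by trial.number and reload the file belonging to study.best_trial afterwards. A sketch of that idea (build_model, the data variables, and the file names are illustrative placeholders, not from the question):

import optuna

def objective(trial):
    model = build_model(trial)   # hypothetical helper that builds a Keras model from trial params
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, verbose=0)
    # Persist this trial's weights so the best ones can be recovered later.
    model.save_weights(f"weights_trial_{trial.number}.h5")
    return min(history.history["val_loss"])

study = optuna.create_study()
study.optimize(objective, n_trials=100)

# Rebuild the best architecture by replaying the best parameters, then load its weights.
best_model = build_model(optuna.trial.FixedTrial(study.best_params))
best_model.load_weights(f"weights_trial_{study.best_trial.number}.h5")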
I'm using a Windows 10 machine. Libraries: Keras with TensorFlow 2.0. Embeddings: GloVe (100 dimensions).
I am trying to implement an LSTM architecture for multi-label text classification.
I am using different types of fine-tuning to achieve better results but with no luck so far.
The main problem, I believe, is the difference in class distributions in my dataset, but after a lot of trial and error I couldn't implement a stratified k-fold split in Keras.
I am also experimenting with dropout layers, batch sizes, # of layers, learning rates, clip values, and validation splits, but I get a minimal boost, or sometimes worse performance.
For metrics, I use mainly ROC and F1.
I also followed the suggestion of a StackOverflow member who said to delete some of my examples to balance my dataset, but if I do that I will be left with very few examples.
What would you suggest to me?
If someone could provide code for a stratified k-fold split based on my implementation, I would be grateful, because I have checked all the online resources but can't implement it.
Any tips, suggestions will be really helpful.
[Images: metric plots; the shapes of the dataset, embeddings, and train/test split; the dataset's label distribution; my LSTM implementation]
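For reference, scikit-learn's StratifiedKFold only handles single-label targets; for multi-label data the iterative-stratification package provides MultilabelStratifiedKFold. A sketch (the array shapes are illustrative, since the asker's code is not reproduced here):

import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# X: padded token-id sequences; y: binary indicator matrix (n_samples, n_labels).
X = np.random.randint(0, 1000, size=(500, 100))
y = (np.random.rand(500, 5) > 0.8).astype(int)

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(mskf.split(X, y)):
    X_train, X_val = X[train_idx], X[val_idx]
    y_train, y_val = y[train_idx], y[val_idx]
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), ...)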
When generating adversarial examples, one typically uses logits as the output of the neural network and then trains the network with cross-entropy.
However, I found that the cleverhans tutorial uses log softmax, then converts the PyTorch model to a TensorFlow model, and finally trains the model.
https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_tutorial_pytorch.py#L65
I am wondering whether using logits instead of log_softmax makes any difference.
As you said, when we get logits from a neural network, we train it using CrossEntropyLoss. An alternative is to compute the log_softmax and then train the network by minimizing the negative log-likelihood (NLLLoss).
Both approaches are equivalent for classification tasks: CrossEntropyLoss is exactly LogSoftmax followed by NLLLoss. However, if you have a different objective function, you may find one of the two formulations more convenient in your scenario.
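A quick numerical check of that equivalence in PyTorch:

import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)               # batch of 8 samples, 10 classes
target = torch.randint(0, 10, (8,))

ce = F.cross_entropy(logits, target)                     # expects raw logits
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)   # expects log-probabilities

print(torch.allclose(ce, nll))  # True: cross_entropy == log_softmax + nll_loss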
References
CrossEntropyLoss
NLLLoss
I want to use sklearn's AdaBoostRegressor with different base estimators. The general AdaBoost introduction does not help much, since it uses the DecisionTreeClassifier.
Where do I find a list of all possible base estimators?
Could I use a neural network, too?
What qualifies the possible base estimators?
Any regressor that implements sklearn's RegressorMixin can be used as the base estimator.
Yes, you can use a neural network or a simple linear regressor as the base estimator.
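For example, a minimal sketch (note that scikit-learn 1.2+ names the parameter estimator, while older versions use base_estimator):

import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import LinearRegression

X = np.random.rand(200, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * np.random.randn(200)

# Any estimator implementing RegressorMixin (fit/predict) can serve as the base.
model = AdaBoostRegressor(estimator=LinearRegression(), n_estimators=50)
model.fit(X, y)
print(model.predict(X[:5]))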
I have been following the http://deeplearning.net/tutorial/ tutorial on how to train an ANN to classify the MNIST numbers. I am now at the "Convolutional Neural Networks" chapter. I want to use the trained network on single examples (MNIST images) and get the predictions. Is there a way to do that?
I have looked ahead in the tutorial and searched on Google but can't find anything.
Thanks a lot in advance for any kind of help!
The material in the earlier chapters of the Theano tutorial, before the Convolutional Neural Networks (CNN) chapter, gives a good overview of how Theano works and of the components the CNN sample code uses. It is reasonable to assume that students reaching this point understand Theano well enough to figure out how to modify the code to extract the model's predictions. Here are a few hints.
The CNN's output layer, called layer3, is an instance of the LogisticRegression class, introduced in an earlier chapter.
The LogisticRegression class has an attribute called y_pred. The comment next to the code that assigns that attribute's value says: "symbolic description of how to compute prediction as class whose probability is maximal".
Looking for places where y_pred is used in the logistic regression sample will highlight a function called predict(). This does for the logistic regression sample what is desired of the CNN example.
Following the same approach, with layer3.y_pred as the output of a new Theano function, yields the model's predictions.
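Concretely, a sketch (layer3, x, test_set_x, and batch_size all come from the tutorial's CNN code, so this assumes you are inside that scope):

import theano

# Compile a function mapping raw input to predicted class labels.
predict_model = theano.function(inputs=[x], outputs=layer3.y_pred)

# The graph is built with a fixed batch_size, so feed exactly that many examples.
test_data = test_set_x.get_value()   # shared variable -> numpy array
print(predict_model(test_data[:batch_size]))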