I have a situation where:
My training accuracy is 93%
CV accuracy is 55%
Test accuracy is 57%
I think this is a classic case of overfitting.
As far as I know, I can use regularization to address it.
I have also read that cross-validation helps in solving the overfitting problem.
Some inquiries I have regarding this:
Is cross-validation used only for hyperparameter tuning, or does it also have a role in solving the overfitting problem?
If cross-validation solves overfitting problems, how?
Or is cross-validation used only as a check to see whether the model is overfitting or not?
I think you are confused about what exactly cross-validation is. I will link to OpenML's explanation of 10-fold cross-validation so you get a better idea.
Overfitting normally occurs when there is not enough data for your model to train on, resulting in it learning patterns in the data set that are not helpful, such as putting too much focus on outlying data points that would be ignored if the model were given a larger data set.
Now to your questions:
1-2. Cross-validation is just one technique that helps prevent or diagnose overfitting. By partitioning the data set into k sub-groups, or folds, you can train your model on k-1 folds and use the remaining fold as unseen validation data to test the model on (a minimal sketch follows these points). This can sometimes help prevent overfitting, but how well it works also depends on how long/how many epochs you are training for. Since you said you have a relatively small data set, you want to make sure you aren't 'over-learning' on it: implementing cross-validation will not do you much good if you are training for hundreds or thousands of epochs on a really small data set.
3. Cross-validation doesn't tell you directly whether your model is overfitting. It may give you hints that it is, if your results vary wildly across folds or across several runs of the program, but it is not going to be clear cut.
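To make the mechanics concrete, here is a minimal sketch of 5-fold cross-validation with scikit-learn; the estimator and the random data below are placeholders, not your actual model:

```python
# Minimal 5-fold cross-validation sketch (placeholder model and data).
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 10)           # placeholder features
y = np.random.randint(0, 2, 200)      # placeholder binary labels

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kfold)

print("fold accuracies:", scores)
print("mean CV accuracy:", scores.mean())
# A large gap between training accuracy and these fold scores is the signal that
# the model is over-fitting; cross-validation itself does not remove the over-fitting.
```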
The biggest problem, and you said it yourself in the comments, is that you don't have a lot of data. The best fix, although not always the easiest, is to increase your data size so your model won't learn unimportant tendencies and put too much focus on outliers.
I will link to a website that is incredibly helpful in explaining the problems of over-fitting and gives a variety of ways to attempt to overcome this problem.
Let me know if I was of help!
Related
I am using K-means for topic modelling on top of Word2Vec embeddings, and would like to understand the implications of training the vectors with, let's say, only 10 dimensions versus embedding with 200 dimensions and then using PCA to get down to 10. Does the second approach make sense at all?
Which one worked better for your specific purposes, & your specific data, after trying both & comparing the end-results against each other, either in some ad-hoc ("eyeballing") or rigorous way?
There's no reason to prematurely reject any approach, given how many details about your data & ultimate end-goals are unstated.
It would be atypical to train a word2vec model to have only 10 dimensions. Published work most often shows the use of 100 to 1000 dimensions, often 300 or 400, assuming you've got enough bulk training data to make the algorithm worthwhile.
(Word2vec needs a lot of varied training text, with many contrasting usage examples for every word of interest, to generate good results. You may occasionally see toy-sized demos, on smaller amounts of data, just to quickly show steps, or some major qualities of the results. But good results, in the aspects for which word2vec is most appreciated, depend on plentiful training data.)
Also, whether or not your aims would be helped by the extra step of PCA to reduce the dimensionality of a larger word2vec model seems another separable question, to be determined experimentally by comparing results with and without that step, on your actual data/problem, rather than guessed at from intuitions from other projects that might not be comparable.
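If it helps, here is a rough sketch of the two pipelines side by side, using gensim and scikit-learn. It assumes gensim 4.x (where the dimensionality argument is vector_size) and uses a tiny synthetic placeholder corpus; a real corpus would need to be far larger for word2vec to be worthwhile:

```python
# Sketch: 10-dim word2vec vs. 200-dim word2vec + PCA down to 10, then K-means on each.
import random
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

random.seed(0)
vocab = ["word%d" % i for i in range(100)]
sentences = [[random.choice(vocab) for _ in range(8)] for _ in range(500)]  # placeholder corpus

# Option A: train the embeddings directly with 10 dimensions (atypical, as noted above).
w2v_small = Word2Vec(sentences, vector_size=10, min_count=1, epochs=20)
vecs_a = w2v_small.wv.vectors

# Option B: train with 200 dimensions, then project down to 10 with PCA.
w2v_large = Word2Vec(sentences, vector_size=200, min_count=1, epochs=20)
vecs_b = PCA(n_components=10).fit_transform(w2v_large.wv.vectors)

# Cluster each version; what matters is how the resulting clusters look
# on your real corpus and for your real end-goal.
labels_a = KMeans(n_clusters=5, n_init=10).fit_predict(vecs_a)
labels_b = KMeans(n_clusters=5, n_init=10).fit_predict(vecs_b)
print(labels_a[:10], labels_b[:10])
```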
I am building a multi-task regression model. However, the ground-truth labels of the different tasks are on different scales, so I wonder whether it is necessary to normalize the targets; otherwise, the MSE of the large-scale tasks will be far bigger and dominate the overall loss. The figure below shows part of my targets. You can clearly see that columns like ASA_m2_c have much higher values than some others.
First, I have already tried some weighted-loss techniques to balance how much attention the model pays to each task during gradient backpropagation. The results show it didn't perform well.
Secondly, I have seen plenty of discussion about normalizing the input data, but hardly any specifically about normalizing the labels. This is partly because most people's problems are single-task classification. I do know PyTorch provides a convenient way to normalize vision datasets via transforms.Normalize, but that still operates on the inputs rather than the labels.
Similar questions: https://forums.fast.ai/t/normalizing-your-dataset/49799
https://discuss.pytorch.org/t/ground-truth-label-normalization/26981/19
PyTorch - How should you normalize individual instances
Moreover, I think it might be helpful to provide some details of my model architecture. The input is first fed into a feature extractor and then several generators use the shared output representation from that extractor to predict different targets.
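To make concrete what I mean by normalizing the targets, here is a minimal sketch of the per-task standardization I am considering, with placeholder data and a placeholder network instead of my real extractor/generator model:

```python
# Sketch: standardize each target column so no single task dominates the MSE.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 32)                                     # placeholder inputs
Y = torch.randn(256, 3) * torch.tensor([1000.0, 1.0, 0.01])  # targets on very different scales

# Per-column statistics, computed on the training targets only.
mu, sigma = Y.mean(dim=0), Y.std(dim=0)
Y_norm = (Y - mu) / sigma                                    # each task now has roughly unit variance

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y_norm)                         # train in the normalized space
    loss.backward()
    optimizer.step()

# Undo the normalization when reporting predictions in the original units.
with torch.no_grad():
    predictions = model(X) * sigma + mu
```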
I've been working on a multi-task learning problem where one head has outputs on the order of ~500 and another between 0 and 1.
I've tried uncertainty weighting, but in vain. So I'd be grateful if you could give me a little clue about your findings (if there has been any progress).
Thanks.
I have some 360-odd features on which I am training my neural network model.
The accuracy I am getting is abysmally bad. There is one feature amongst the 360 that is more important than the others.
Right now, it does not enjoy any special status amongst the other features.
Is there a way to lay emphasis on one of the features while training the model? I believe this could improve my model's accuracy.
I am using Python 3.5 with Keras and Scikit-learn.
EDIT: I am attempting a regression problem
Any help would be appreciated
First of all, I would make sure that this feature alone has decent predictive power, but I am assuming you have already checked that.
Then, one approach you could take is to "embed" your 359 other features in a first layer, and only feed in your special feature once you have compressed the remaining information.
Contrary to what most tutorials make you believe, you do not have to feed all features into the first layer; you can technically insert them at any layer (or even multiple times).
The first layer that captures your other inputs then acts as a kind of "PCA approximator", embedding a high-dimensional feature space (359 dimensions) into something that does not drown out your special feature (maybe 20-50 dimensions as a starting point?).
Of course there is no guarantee that this will work, but you have a much better chance of the network paying attention to your special feature, and I am fairly certain that in general you should still see an increase in performance if the single feature is strongly enough correlated with your output.
The other question that is still open is the kind of task you are training for, i.e., whether you are doing some form of classification (if so, how many classes?), or regression. This might also influence architectural choices, and the amount of focus you can/should put on a single feature.
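Here is a minimal sketch of this "compress, then concatenate" idea, assuming a regression target and standalone Keras 2.x (with tf.keras the imports become tensorflow.keras); the layer sizes are only placeholders:

```python
# Sketch: compress the 359 ordinary features first, then inject the special feature.
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

other_in = Input(shape=(359,), name="other_features")
special_in = Input(shape=(1,), name="special_feature")

# Compress the bulk of the features into a small representation...
compressed = Dense(32, activation="relu")(other_in)

# ...then concatenate the special feature so it is not drowned out.
merged = Concatenate()([compressed, special_in])
hidden = Dense(16, activation="relu")(merged)
output = Dense(1, activation="linear")(hidden)      # single regression output

model = Model(inputs=[other_in, special_in], outputs=output)
model.compile(optimizer="adam", loss="mse")
# model.fit([X_other, X_special], y, epochs=..., batch_size=...)
```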
There are several feature selection and importance techniques in machine learning. Please follow this link.
Intro
I am building a classifier to recognize the presence of defects in pictures, and while trying to improve my models I tried Batch Normalization, mainly to exploit its ability to speed up convergence.
While it gives the expected speed benefits, I also observed some strange symptoms:
validation metrics are far from good; it smells of overfitting, of course
predictions calculated at any point during training are completely wrong, particularly when images are picked from the training dataset; the corresponding metrics match the (val_loss, val_acc) rather than the (loss, acc) printed during training
This failure to predict is the evidence that worries me the most: a model which does not predict the same as during training is useless!
Searches
Googling around I found some posts that seem to be related, particularly this one (Keras BN layer is broken) which also claims the existence of a patch and of a pull request, that sadly "was rejected".
This is quite convincing, in that it explains a failure mechanism that matches my observations. As far as I understand, since BN computes and keeps moving statistics (exponential averages and standard deviations) to do its job, and these require many iterations to stabilize and become meaningful, it will of course behave badly when asked to make a prediction from scratch while those statistics are not yet mature (if I have misunderstood this concept, please tell me).
Actual Questions
But thinking more thoroughly, this doesn't really close the issue, and actually raises further doubts. I am still perplexed that:
This Keras BN bug is said to affect the transfer-learning use case, while mine is a classical case of a convolutional classifier trained from standard Glorot initialization. This should have been complained about by thousands of users, yet there isn't much discussion about it.
technically: if my understanding is correct, why aren't these statistics (since they are so fundamental for prediction) saved in the model, so that their latest values are available when making a prediction? It seems perfectly feasible to keep and use them at prediction time, just like any trainable parameter.
management-wise: if Keras' BN were really broken, how could such a dreadful bug remain unaddressed for more than a year? Isn't anybody out there using BN and needing predictions out of their models? And isn't anybody able to fix it?
more practically: if, on the contrary, it is not a bug but just a misunderstanding of how to use it, where do I get a clear illustration of how to correctly get a prediction in Keras from a model which uses BN? (demo code would be appreciated)
Obviously I would really love the right question to be the last one, but I had to include the previous ones, given the evidence of someone claiming that Keras BN is broken.
Note to SE OP: before *closing the question as too broad*, please consider that, since it is not really clear what the issue is (Keras BN being broken, or users being unable to use it properly), I had to offer several directions, among which whoever wishes to answer can choose.
Details
I am using keras 2.2.4 from a python 3.6 virtual environment (under pyenv/virtualenv).
data are fed through a classic ImageDataGenerator() + flow_from_directory() / flow_from_dataframe() scheme (augmentation is turned off though: only rescale=1./255 is applied), but I also tried to make them static
actually, in the end, to verify the above behaviour, I generated a single batch x,y=next(valid_generator) and used that same batch for both training and validation. While on the training side it converges (yes, the aim was exactly to let it overfit!), on the validation side both metrics are poor and predictions are completely wrong and erratic (almost random)
in this setup, if BN is turned off, val_loss and val_acc match exactly with loss and acc, and with those I obtain from predictions calculated after training has finished (a stripped-down sketch of this check is included below)
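For reference, this is roughly the stripped-down check I mean, with placeholder arrays instead of the real images (the real case goes through ImageDataGenerator):

```python
# Sketch of the single-batch check: train and validate on the very same batch,
# with and without BatchNormalization.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

np.random.seed(0)
x = np.random.rand(32, 64).astype("float32")                 # one fixed "batch"
y = (x[:, 0] > 0.5).astype("float32").reshape(-1, 1)         # a target the model can easily overfit

model = Sequential([
    Dense(32, activation="relu", input_shape=(64,)),
    BatchNormalization(),                                     # remove this line for the BN-off control
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With BN off, val_loss/val_acc match loss/acc almost exactly;
# with BN on, I see the mismatch described above.
model.fit(x, y, batch_size=32, epochs=200, validation_data=(x, y), verbose=0)
print(model.evaluate(x, y, verbose=0))                        # inference-mode metrics on the same batch
```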
Update
In the process of writing a minimal example of the issue, after struggling to make the problem show up reliably, I realized that it appears on some machines and not on others. In particular, the problem is evident on a host running Keras 2.3.1, while another host with Keras 2.2.4 doesn't show it.
I'll post a minimal example here along with specific module versions asap.
I'm doing a programming assignment for Andrew Ng's Deep Learning course on Convolutional Models that involves training and evaluating a model using Keras. What I've observed after a little playing with various knobs is something curious: the test accuracy of the model greatly improves (from roughly 50% to roughly 90%) when I set the validation_fraction parameter of the Model.fit operation to 0. This is surprising to me; I would have thought that eliminating the validation samples would lead to overfitting of the model, which would, in turn, reduce accuracy on the test set.
Can someone please explain why this is happening?
You're right, there is more training data, but the increase is pretty negligible since I was setting the validation fraction to 0.1, so removing it only increases the training data by about 11.1%. However, thinking about it some more, I realized that removing the validation step doesn't have any effect on the model itself, hence no impact on test accuracy. I think I must have changed some other parameter too, though I don't remember which.
As Matias says, it means there is more training data to work with.
However, I'd also make sure that the test accuracy really does increase from 50% to 90% consistently. Run it a few times to make sure (for instance with a loop like the sketch below). There is a possibility that, because there are so few validation samples, the model simply got lucky. That's why it is important to have a lot of validation data: to make sure the model isn't just getting lucky, and that there's actually a method to the madness.
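For instance, something along these lines, with placeholder data and a placeholder model standing in for the course assignment:

```python
# Sketch: retrain from scratch several times and look at the spread of test accuracy.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

rng = np.random.RandomState(0)
x_train, y_train = rng.rand(500, 20), rng.randint(0, 2, (500, 1))  # placeholder data
x_test, y_test = rng.rand(200, 20), rng.randint(0, 2, (200, 1))

test_accs = []
for run in range(5):
    model = Sequential([Dense(16, activation="relu", input_shape=(20,)),
                        Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=10, verbose=0)               # no validation split, as in the question
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    test_accs.append(acc)

print("test accuracies over 5 runs:", test_accs)
# If the jump to ~90% only shows up in some runs, the model probably just got lucky.
```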
I go over some of the "norms" when it comes to training and testing data in my book about stock prediction (another great way in my opinion to learn about Deep Learning). Feel free to check it out and learn more, as it's great for beginners.
Good Luck!