Need help understanding the MLPClassifier - scikit-learn

I've been working with the MLPClassifier for a while, and I think I have had a wrong interpretation of what the function does this whole time. I believe I have it right now, but I am not sure. So I will summarize my understanding, and it would be great if you could add your thoughts on the correct interpretation.
So with the MLPClassifier we are building a neural network based on a training dataset. Setting early_stopping = True makes it possible to use a validation dataset within the training process in order to check whether the network works on new data as well. If early_stopping = False, no validation is done within the process. After the model has been fitted, we can use it to predict on a third dataset if we wish to.
What I was thinking before is that during the whole training process a validation dataset is taken aside anyway, with validation after every epoch.
I'm not sure if my question is understandable, but it would be great if you could help me to clear my thoughts.

The sklearn.neural_network.MLPClassifier uses (a variant of) Stochastic Gradient Descent (SGD) by default. Your question could be framed more generally as how SGD is used to optimize the parameter values in a supervised learning context. There is nothing specific to Multi-layer Perceptrons (MLP) here.
So with the MLPClassifier we are building a neural network based on a training dataset. Setting early_stopping = True makes it possible to use a validation dataset within the training process
Correct, although it should be noted that this validation set is taken away from the original training set.
in order to check whether the network works on new data as well.
Not quite. The point of early stopping is to track the validation score during training and stop training as soon as the validation score stops improving significantly.
If early_stopping = False, no validation is done within the process. After the model has been fitted, we can use it to predict on a third dataset if we wish to.
Correct.
What I was thinking before is that during the whole training process a validation dataset is taken aside anyway, with validation after every epoch.
As you probably know by now, this is not so. The division of the learning process into epochs is somewhat arbitrary and has nothing to do with validation.
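To make the two behaviours concrete, here is a minimal sketch using scikit-learn; the toy dataset is only an illustration, but the parameter names are the real MLPClassifier ones:

# toy data for illustration only
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)

# early_stopping=True: a validation_fraction share of the *training*
# data is set aside internally, and training stops once the validation
# score has not improved by at least tol for n_iter_no_change epochs
clf = MLPClassifier(early_stopping=True, validation_fraction=0.1,
                    n_iter_no_change=10, tol=1e-4, random_state=0)
clf.fit(X, y)

# early_stopping=False (the default): no internal validation set is
# carved out; the stopping criterion is based on the training loss
clf_no_val = MLPClassifier(early_stopping=False, random_state=0)
clf_no_val.fit(X, y)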

Related

LSTM RNN Walking Forward for timeseries

I am confused about using walking forward validation. I have found several pages with different or unclear wording on "validation" and "test".
Basically, when I apply it to time series forecasting, I divide the data into training, validation and test set. The validation is then a part of the training set. I understood that. Obviously, for time series, time has to be taken into account, so we have to resort to the walking forward method.
My understanding is that the walking forward method is only applied in the training set so that, for example, hyperparameters can be optimized when they are created. The test set plays no role and is only used at the end to evaluate the model. Is this correct?
Or does the walking forward method split the test set?
I see many examples that do not consider this walking forward method at all. If the walking forward method is used, it is the validation set that gets split:
Another question: if I don't split the validation and training sets myself before training, and instead choose the following settings in model.fit:
model.fit(X_train, y_train,
          validation_split=0.1,  # with shuffle=False this is the last 10%
          shuffle=False)
is this model similar to the walking_forward method?
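For comparison, a minimal sketch of a walking forward (expanding window) split on the training set, using scikit-learn's TimeSeriesSplit; the arrays below are stand-ins for the question's chronologically ordered X_train, y_train:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# stand-ins for the chronologically ordered training arrays
X_train = np.arange(100, dtype=float).reshape(-1, 1)
y_train = np.arange(100, dtype=float)

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(X_train):
    # each fold trains on an expanding window of past data and
    # validates on the block that immediately follows it in time
    X_tr, X_val = X_train[train_idx], X_train[val_idx]
    y_tr, y_val = y_train[train_idx], y_train[val_idx]
    # build, fit and score the model on this fold here

Note that validation_split=0.1 with shuffle=False corresponds to a single chronological holdout (the last 10% of the samples), not to the repeated expanding-window splits above.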

Abrupt increase in RMSE in LSTM while working on Time Series Prediction

I have the following LSTM network (Fig 1) for predicting the Bitcoin price. The input is the hourly close price of Bitcoin. I am facing some issues and any advice is appreciated.
Earlier, on the same network, my RMSE was 6.71 on the testing set and 7.41 on the training set. I recompiled the whole code and there was an abrupt increase to 233.51 for the training set and 345.56 for the testing set. Can anyone help me find the reason behind this?
Also, how can I improve the accuracy of my network, as it is very low in every iteration?
How should I decide the parameters for my LSTM network (units, epochs, batch_size, time_steps to input)?
Thank you in advance for any help extended.
Your question requires a lot more information: for example, the data size, the timestep lookback, the data preprocessing procedure, etc. But I would recommend debugging your problem with the following method. First, check whether your input/output data are processed properly. Then, try training a simpler model than an LSTM, since an LSTM can easily overfit. But sometimes, if the input signal is too random, it is normal for your model's results to fluctuate heavily, as there is no correlation in the data.
P.S. Never use a machine learning model to predict stock prices. It never works.
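In the spirit of the sanity checks above, one simple baseline to compare any reported RMSE against is a naive persistence forecast (predict that the next close equals the current close). A minimal sketch; the synthetic random walk is only a stand-in for the real hourly close prices:

import numpy as np

rng = np.random.default_rng(0)
# stand-in for hourly close prices; replace with the real series
prices = 7000 + np.cumsum(rng.normal(0, 10, size=1000))

# persistence baseline: next close = current close
preds, actuals = prices[:-1], prices[1:]
rmse = np.sqrt(np.mean((preds - actuals) ** 2))
print(f"persistence-baseline RMSE: {rmse:.2f}")

# if a trained model cannot beat this number on the same data, something
# in the pipeline (scaling, targets, splits) is likely broken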

Model underfitting

I have trained a model and it took me quite a while to find the correct hyperparameters.
The model has now been trained for 15 hours and it seems to do its job quite well.
When I observed the training and validation loss, though, the training loss was somewhat higher than the validation loss (red curve: training, green: validation).
I use dropout to regularize my model, and as far as I have understood, dropout is only applied during training, which might be the reason.
Now I am wondering whether I have trained a valid model.
It doesn't seem like the model is heavily underfitted, does it?
Thanks in advance for any advice,
cheers,
M
First, check whether you have a good dataset; i.e., if it is a classification task, get an equal number of images for all classes, and get them from the same source, not from different sources. Regularization and dropout are used against overfitting/high variance, so don't worry about those.
Then, I think your model is doing well: when you trained it, the initial errors differed, but as the epochs increased, both losses settled onto a steady path. So it is good. The reason may be what I mentioned above, or you could try shuffling the data and then using train_test_split to get a better distribution of the training and validation sets.
A plot of learning curves shows a good fit if:
The plot of training loss decreases to a point of stability.
The plot of validation loss decreases to a point of stability and has a small gap with the training loss.
In your case these conditions are satisfied.
Still, if you want to deal with high bias/underfitting, here are a few methods:
Train bigger models
Train longer. Use better optimization techniques
Try different neural network architectures and also hyperparameters
You can also use cross-validation or GridSearchCV to find a better optimizer or hyperparameters. This may take really long, because you have to train on different parameter combinations each time; given that a single run already takes 15 hours, the search could be very slow, but you will find better parameters and can then train with them.
Above all I think your model is doing okay.
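A minimal sketch of the GridSearchCV idea mentioned above; the estimator, the parameter grid, and the toy data are illustrative assumptions, not the poster's actual model:

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# with long training times, keep the grid deliberately small
param_grid = {
    "hidden_layer_sizes": [(64,), (128, 64)],
    "alpha": [1e-4, 1e-3],
}
search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)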
If your model underfits, its performance will be lower, similarly to the case of overfitting, because it cannot learn effectively enough to reach the optimal result, i.e. the proper function to fit the given distribution. So you would have to use less regularization, e.g. less dropout, to get the optimal result.
Furthermore, the sampling can also be crucial, because there can be training/validation splits where your model performs well on the validation set and worse on the training set, and vice versa. This is one of the reasons why we use cross-validation and different sampling methods, e.g. stratified k-fold.
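A small sketch of the stratified k-fold idea; the classifier and the imbalanced toy data are illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)

# StratifiedKFold keeps the class proportions identical in every fold,
# which reduces the train/validation sampling variance described above
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean(), scores.std())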

How to calculate evaluation metrics on training data in TensorFlow's Object Detection API?

I have been using the Object Detection API for quite a while now, so training models and using them for inference is all good. Unfortunately, when using TensorBoard to visualize metrics (such as mAP, AR, classification/localization loss), we only get to see those metrics on the validation set. I'd like to calculate the aforementioned metrics also during training, so that we can compare train/validation metrics on TensorBoard.
Edit: I've stumbled on this post, which addresses the same concern: how to check both training/eval performances in tensorflow object_detection
Anyone got a pointer on how to achieve this?
You can evaluate your model on the training data by adding the arguments --eval_training_data=True --sample_1_of_n_eval_on_train_examples=10 to the arguments of model_main.
By doing so, you instruct it to perform the evaluation on the training data, and you choose how much to dilute the training data sent to evaluation, since usually the amount of training data is very large.
The thing is that I don't think it's currently possible to evaluate on both the training and the validation data, but I don't think that's too bad, since evaluation on training data is usually only a sanity check, not an actual continuous evaluation of the model.
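Put together, the invocation might look like the following; the two evaluation flags come from the answer above, while the paths and the --pipeline_config_path/--model_dir flags are illustrative assumptions about a typical model_main call:

python model_main.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=path/to/model_dir \
    --eval_training_data=True \
    --sample_1_of_n_eval_on_train_examples=10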

Validation loss in keras while training LSTM and stability of LSTM

I am using Keras to train my LSTM model for a time series problem. My activation function is linear and the optimizer is RMSprop.
However, I observe the tendency that while the training loss decreases slowly over time and fluctuates around a small value, the validation loss jumps up and down with a large variance.
Therefore, I have two questions:
1. Does the validation loss affect the training process? Will the algorithm look at the validation loss and slow down the learning rate in case it fluctuates a lot?
2. How can I make the model more stable, so that it returns more stable values of validation loss?
Thanks
Does the validation loss affect the training process?
No. The validation loss is computed on a small sample of data that is excluded from the training process. It is run through the network at the end of each epoch to test how well training is going, so that you can check whether the model is overfitting (i.e. the training loss is much lower than the validation loss).
Fluctuation in validation loss
This is a bit tougher to answer without seeing the network or the data. It could just mean that your model isn't generalizing well to unseen data: it is not seeing enough similar trends between the training and validation data, so each time the weights are adjusted to better suit the training data, the model becomes less accurate on the validation set. You could possibly turn down the learning rate, but if your training loss is decreasing slowly, the learning rate is probably fine. I think in this situation you have to ask yourself a few questions. Do I have enough data? Does a true time series trend exist in my data? Have I normalized my data correctly? Is my network too large for the data I have?
I had this issue: while the training loss was decreasing, the validation loss was not. I checked and found the following while I was using an LSTM:
I simplified the model: instead of 20 layers, I opted for 8 layers.
Instead of scaling within the range (-1, 1), I chose (0, 1); this alone reduced my validation loss by an order of magnitude.
I reduced the batch size from 500 to 50 (just trial and error).
I added more features, which I thought would intuitively add some new, useful information to the X->y pair.
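As a concrete note on the scaling point above, a minimal sketch with scikit-learn's MinMaxScaler; the random data is only a stand-in:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.random.default_rng(0).normal(size=(100, 3))  # stand-in data

# scale to (0, 1) instead of (-1, 1); fit on the training data only,
# then reuse the same transform for validation/test data
scaler = MinMaxScaler(feature_range=(0, 1))
X_train_scaled = scaler.fit_transform(X_train)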
Possible reasons:
Your validation set is very small compared to your training set, which usually happens. A small change in the weights makes the validation loss fluctuate much more than the training loss. This does not necessarily mean that your model is overfitting, as long as the overall tendency of the validation loss keeps decreasing.
Maybe your training and validation data come from different sources, so they have different distributions. This may happen when your data is a time series and you split the train/validation data at a specific timestamp.
Does the validation loss affect the training process?
No. Validation (a single forward pass) and training (forward and backward passes) are different processes; hence a single forward pass does not change how you train next.
Will the algorithm look at the validation loss and slow down the learning rate in case it fluctuates a lot?
No, but I guess you could implement your own method to do so. However, one thing should be noted: the model is trying to learn the best solution to your cost function, which is fed by training data only, so changing the learning rate based on the validation loss does not make too much sense.
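For what it's worth, Keras does ship a built-in callback for exactly this, ReduceLROnPlateau. A minimal sketch, where the tiny dense model and random data are stand-ins for the poster's LSTM:

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.normal(size=(200, 1))  # stand-in data

# stand-in model; replace with your own LSTM
model = keras.Sequential([keras.layers.Input(shape=(4,)),
                          keras.layers.Dense(1)])
model.compile(optimizer="rmsprop", loss="mse")

# halve the learning rate whenever val_loss has not improved for 5 epochs
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                              factor=0.5, patience=5)
model.fit(X, y, validation_split=0.2, epochs=20,
          callbacks=[reduce_lr], verbose=0)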
How can I make the model more stable, so that it returns more stable values of validation loss?
The reasons are explained above. If it is the first case, enlarging the validation set will make your loss look more stable, but it does NOT mean the model fits better. My suggestion: as long as you are sure your model does not overfit (the gap between training and validation loss is not too large), just save the model that gives the lowest validation loss.
If it is the second case, it can be complicated, depending on your situation. You could try to exclude samples from the training set that are not "similar" to your validation set, or enlarge your model's capacity if you have enough data. Or perhaps add more metrics to monitor how the training is going.
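Saving the model with the lowest validation loss, as suggested above, maps directly onto Keras's ModelCheckpoint callback. A short sketch, where the file name and the commented fit arguments are placeholders:

from tensorflow import keras

# keep only the weights from the epoch with the lowest val_loss
checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_loss", save_best_only=True)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, callbacks=[checkpoint])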
