After a few epochs, the difference between Valid loss and Loss increases - conv-neural-network

I'm trying to train the model on the MagnaTagATune dataset. Is the model being trained properly? Does anyone know what the problem is? Will waiting (training longer) solve it?
The results are shown in the image.
[image: training loss vs. validation loss per epoch]
Thank you pseudo_random_here for your answer. Your tips were helpful, but the problem is still there.
Unfortunately, changing the learning rate did not work. Following your advice, I am now using the SGD optimizer with a learning rate of 0.1. I even tried another model that was built for this, but the problem was not solved.
from keras.optimizers import SGD
opt = SGD(lr=0.1)
model.compile(loss="categorical_crossentropy", optimizer=opt)

Short answer: I would say your val_loss is too high, and waiting is unlikely to solve your problem.
Explanation: I believe there are two possibilities here:
1. Your architecture is not suitable for the data.
2. Your learning rate is too small.
PS: It would help a lot if you provided info on what NN architecture you are using, what loss function we are looking at, and what exactly you are predicting.
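If the learning rate is the culprit, a quick sweep over a few values is usually enough to tell. A minimal sketch, where build_model, x_train and y_train are hypothetical placeholders rather than anything from the question:

from tensorflow.keras.optimizers import SGD

for lr in (0.001, 0.01, 0.1):
    model = build_model()                        # hypothetical: returns a fresh, uncompiled model
    model.compile(optimizer=SGD(learning_rate=lr), loss="categorical_crossentropy")
    history = model.fit(x_train, y_train, validation_split=0.2, epochs=5, verbose=0)
    print(lr, history.history["val_loss"][-1])   # compare val_loss after a few epochs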

Related

Why is the training accuracy not constant in MLPClassifier?

I am a beginner in neural networks, and I'm trying to do text classification using MLPClassifier. I'm confused because the training accuracy changes every time I re-run it. The code I use is as follows:
from sklearn.neural_network import MLPClassifier

classifier2 = MLPClassifier(activation='logistic', batch_size=32,
                            hidden_layer_sizes=(200,), learning_rate='constant',
                            learning_rate_init=0.01, max_iter=100,
                            random_state=None,  # no fixed seed, so every run starts differently
                            solver='adam', verbose=2,
                            beta_1=0.9, beta_2=0.999, epsilon=1e-8,
                            n_iter_no_change=10, early_stopping=True, warm_start=True)
classifier2 = classifier2.fit(Train_X1_Tfidf, Train_Y1)
classifier2.score(Train_X1_Tfidf, Train_Y1)
The difference is not significant; as far as I have tried, the biggest difference in accuracy is only around 3%. Is there any explanation for this? Thank you if someone wants to help explain.
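For what it's worth, this run-to-run variation comes from the random weight initialization, the mini-batch shuffling done by adam, and (with early_stopping=True) the random validation split; with random_state=None a new seed is drawn on every run. A minimal sketch on synthetic data (not the asker's TF-IDF features) showing that fixing random_state makes the score reproducible:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the TF-IDF data, used only for illustration
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(200,), solver='adam',
                    learning_rate_init=0.01, max_iter=100,
                    random_state=42)   # fixed seed: same weight init and shuffling each run
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))           # identical score on every re-run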

Abrupt increase in RMSE in LSTM while working on Time Series Prediction

I have the following LSTM network (Fig 1) for predicting the Bitcoin price. The input is the hourly close price of Bitcoin. I am facing some issues and any advice is appreciated.
Earlier, on the same network, my RMSE on the testing and training sets was 6.71 and 7.41, respectively. I recompiled the whole code and there was an abrupt increase to 233.51 for the training set and 345.56 for the testing set. Can anyone help me find the reason behind this?
Also, how can I improve the accuracy of my network, as it is very low in every iteration?
How should I decide the parameters for my LSTM network (units, epochs, batch_size, time_steps to input)?
Thank you in advance for any help extended.
Your question requires a lot more information, for example data size, timestep lookback, data preprocessing procedure, etc. But I would recommend debugging your problem with the following method. First, check whether your input/output data are processed properly. Then, try training a simpler model than an LSTM, since an LSTM can easily overfit. But sometimes, if the input signal is too random, it is normal for your model's results to fluctuate heavily, as there is no correlation in the data.
PS: never use a machine learning model to predict stock prices. It never works.
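As a concrete version of the "try something simpler first" advice: a naive persistence baseline (predict the next close as the last observed close) is a useful sanity check. If the LSTM cannot beat it, the data pipeline or the data itself is likely the problem. A minimal sketch, where prices is a random placeholder array rather than the asker's data:

import numpy as np

def persistence_rmse(prices):
    preds = prices[:-1]           # prediction for time t+1 is the value observed at time t
    actual = prices[1:]
    return np.sqrt(np.mean((preds - actual) ** 2))

prices = np.random.rand(1000) * 100   # placeholder hourly close series
print("baseline RMSE:", persistence_rmse(prices))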

How to properly use BERT in keras for classification

I am having an issue using BERT for classification of text in my database. Previously, I have used GloVe and ELMo, which worked quite well. Random forests also give me quite good F1-scores (over 0.85); however, with BERT I am stuck around 0.55. I tried modifying the learning rate for the Adam optimizer, using anything between 0.001 and 0.000001, but nothing really helps.
This is my code: https://github.com/EuropeanSocialInnovationDatabase/ESID-main/blob/development/TextMining/Classifiers/DatabaseWithKickStarter/NNClassifierTest2.py
If anyone can pin the problem down, I would be really grateful.
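For context only (this is not the linked repository's code): BERT fine-tuning in Keras is usually run with a very small learning rate, commonly in the 2e-5 to 5e-5 range, for only a few epochs. A rough sketch assuming the Hugging Face transformers library and hypothetical train_dataset/val_dataset objects:

import tensorflow as tf
from transformers import TFBertForSequenceClassification

# Assumed setup: binary classification with a pretrained BERT base model
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),   # typical fine-tuning learning rate
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_dataset, validation_data=val_dataset, epochs=3)  # hypothetical tf.data datasets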

Wrong predictions from MNIST keras model

I am new to neural networks, so I tried my first neural network, which is pretty close to the one on the Keras learn page, given below:
https://github.com/aakarsh1011/Neural-Network/blob/master/MNSIT%20classification.ipynb
Kindly look at the ending, where I read a random image and tried to predict it; it comes out as a bag, but when trained with epochs=5 it was predicted as a sandal.
Is something wrong with my code or labeling?
UPDATE - Being new to the field, I didn't know the importance of epochs when I asked this question; I was afraid of over-fitting the model or training it too much. But there is no definite way to decide this, it's all trial and error. GOOD LUCK!
First of all, as far as I can see, your code is correct. The model predicting the wrong item can be caused by it not being trained for long enough. I would highly recommend setting epochs=100, and you will be able to see the model's accuracy rise. You should generally try to give your model as many epochs as possible for training; it will simply take some time. Try out a few different numbers of epochs to find one that doesn't take too long but still gives an acceptable result.
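One common way to "try out different numbers of epochs" without guessing is an EarlyStopping callback, which stops once the validation loss stops improving. A minimal sketch, assuming a compiled Keras model named model and training data x_train, y_train (hypothetical names, not from the linked notebook):

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)   # stop once val_loss stops improving

history = model.fit(x_train, y_train,
                    validation_split=0.1,
                    epochs=100,                # upper bound; training usually stops earlier
                    callbacks=[early_stop])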

How do I know if my tensorflow structure is good for my problem?

There are two sets of very similar code below, with a very simple input, as an illustrative example for my question. I think an explanation of the following observation can answer my question. Thanks!
When I run the following code, the model can be trained quickly and can predict good results.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# xs and ys as defined in the second snippet below
xs = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
ys = np.array([100, 150, 200, 250, 300, 350, 400, 450, 500, 550], dtype=float)

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=1000)
print(model.predict([7.0]))
However, when I run the following code, which is very similar to the one above, the model trains very slowly, may not be well trained, and gives bad predictions (i.e. the loss becomes <1 easily with the code above but stays at around 20000 with the code below).
model = keras.Sequential()
model.add(keras.layers.Dense(2, activation='relu', input_shape=(1,)))
model.add(keras.layers.Dense(1))
# model.compile(optimizer=tf.train.AdamOptimizer(0.1), loss='mean_squared_error')
model.compile(optimizer=tf.train.AdamOptimizer(1), loss='mean_squared_error')
xs = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
ys = np.array([100, 150, 200, 250, 300, 350, 400, 450, 500, 550], dtype=float)
model.fit(xs, ys, epochs=1000)
print(model.predict([7.0]))
One more note: when I train my model with the second set of code, it is only occasionally well trained (~8 out of 10 times it is not well trained, and the loss remains >10000 after 1000 epochs).
I don't think there is any direct way to choose the best deep architecture other than doing multiple experiments, varying the hyper-parameters and changing the architecture. Compare the performance of each experiment and choose the best one.
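A minimal sketch of that experiment-and-compare workflow on the toy data from the question, sweeping a few Adam learning rates and keeping the one with the lowest final loss (the particular values are illustrative):

import numpy as np
from tensorflow import keras

xs = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
ys = np.array([100, 150, 200, 250, 300, 350, 400, 450, 500, 550], dtype=float)

results = {}
for lr in (1.0, 0.1, 0.01):
    model = keras.Sequential([
        keras.layers.Dense(2, activation="relu", input_shape=(1,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="mean_squared_error")
    hist = model.fit(xs, ys, epochs=1000, verbose=0)
    results[lr] = hist.history["loss"][-1]

print(results)   # keep the configuration with the lowest final loss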
