I have split my data using sklearn's train_test_split function and I am using model.fit in Keras to train.
During training it prints the training and validation statistics to the terminal.
What I am interested in, though, is that when it prints the validation stats like accuracy and loss, I also want a per-class misclassification count. Is that possible? I am training a binary classifier.
Thanks in advance.
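One common way to get this (a sketch, not from the original thread) is a custom Keras callback that computes a confusion matrix on the held-out data at the end of each epoch. It assumes x_val and y_val are the arrays from train_test_split and that the model outputs a single sigmoid probability:
import numpy as np
from sklearn.metrics import confusion_matrix
from keras.callbacks import Callback

class PerClassErrors(Callback):
    """Print per-class misclassification counts after each epoch."""
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_epoch_end(self, epoch, logs=None):
        # Threshold the sigmoid output at 0.5 to get hard labels
        preds = (self.model.predict(self.x_val) > 0.5).astype(int).ravel()
        cm = confusion_matrix(self.y_val, preds)
        # Off-diagonal entries are the misclassifications per true class
        print("epoch {}: class 0 errors = {}, class 1 errors = {}".format(epoch, cm[0, 1], cm[1, 0]))

# Hypothetical usage:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           callbacks=[PerClassErrors(x_val, y_val)])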
I am training an autoencoder neural network for work. I am taking an image numpy array dataset as input (16110 samples in total) and want to split it into a training and a validation set using the autoencoder.fit command below. While training, the network reports "Train on 12856 samples, validate on 3254 samples".
However, I need to save both the training and the validation data into separate files. How can I do that?
from keras.callbacks import EarlyStopping, ModelCheckpoint

es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
history = autoencoder.fit(dataNoise, dataNoise, epochs=30, batch_size=256,
                          shuffle=True, callbacks=[es, mc], validation_split=0.2)
You can use the train_test_split function from sklearn; see the code below.
from sklearn.model_selection import train_test_split

train_split = 0.9  # set this to the fraction you want for training
train_noise, valid_noise = train_test_split(dataNoise, train_size=train_split,
                                            shuffle=True, random_state=123)
Now use train_noise as both x and y, and valid_noise as the validation data, in model.fit.
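For example (a sketch, reusing the es and mc callbacks from the question; the np.save file names are illustrative), you can pass the split arrays to fit and write each set to its own file:
import numpy as np

# Denoising-style fit: each split array serves as both input and target
history = autoencoder.fit(train_noise, train_noise, epochs=30, batch_size=256,
                          shuffle=True, callbacks=[es, mc],
                          validation_data=(valid_noise, valid_noise))

# Save the two sets to separate files
np.save('train_data.npy', train_noise)
np.save('valid_data.npy', valid_noise)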
I have set up a ResNet50 network for an optical application. Given two input images, the network estimates 65 values (regression), and it works pretty well. However, the two input images belong to a time series, and images in the series are somewhat correlated over a span of 10-15 time steps, so I expect that an additional RNN could improve the estimates. I tried to set up the network shown in the figure, wrapping the ResNet50s in "TimeDistributed" layers and mostly freezing the ResNet50 parameter values found by separate training. However, the RNN training does not give useful accuracy.
[Figure: Full LSTM network]
I have now spent 2-3 weeks trying to debug my code (in particular the generator) but I have not found any coding errors. In frustration, I tried to set up the simplest RNN I could think of: a complete ResNet50 followed by either one or two SimpleRNNs with linear activation. However, they do not come even close to the accuracy of the ResNet50 alone, in spite of the correlated time series.
[Figure: SimpleRNN network]
So my question is: Is it correct to assume that a single SimpleRNN with linear activation should provide the same accuracy as the ResNet50 alone?
This is a bit speculative, but it might suggest an approach to debugging the RNN and answering your question. Here is an extremely simple network with a SimpleRNN and a test input of two samples, each with a single time step and a single feature, i.e. shape=(2,1,1):
from keras.models import Sequential
from keras.layers import SimpleRNN
import numpy as np

# Two samples, each with one time step and one feature: shape (2, 1, 1)
x_train = np.array([[[0.1]],
                    [[0.2]]])
y_train = np.array([[1], [0]])
print(x_train.shape)
print(x_train)
print(y_train.shape)
print(y_train)

# Simple network: a single linear SimpleRNN unit, no bias
model = Sequential()
model.add(SimpleRNN(1, activation=None, use_bias=False, input_shape=(1, 1)))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
model.fit(x_train, y_train, epochs=10, batch_size=2)

# Inspect the learned weights and the resulting predictions
wgt = model.get_weights()
print(wgt)
print('model.predict(x_train)')
print(model.predict(x_train))
Based on running the above, two weights come out of the RNN network. The first appears to be a simple scaling of the input; the second, I suspect, is the recurrent weight, which is not actually used with a single time step as in this example. Since the activation is linear, the prediction is then just the input multiplied by the first weight, which matches the output of model.predict.
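You can check this directly (a small sketch, reusing model and x_train from above): with linear activation and a single time step, the prediction should equal the input scaled by the kernel weight.
# kernel (input) weight and recurrent weight, in that order
kernel, recurrent = model.get_weights()
print(kernel, recurrent)

# With one time step and linear activation, predict(x) == x * kernel
manual = x_train[:, 0, :] * kernel[0, 0]
print(np.allclose(model.predict(x_train), manual))  # expected: True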
You may be able to extend this approach to reason about the performance with the ResNet and potentially answer your question. I hope this helps.
Is my regression line underfitting, and if so, what can I do to get accurate results? I have not been able to tell whether a regression line is overfitting, underfitting, or accurate, so suggestions on that would also be appreciated. The file "Advertising.csv": https://github.com/marcopeix/ISL-linear-regression/tree/master/data
#Importing the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

#reading and knowing the data
data = pd.read_csv('Advertising.csv')
#print(data.head())
#print(data.columns)
#print(data.shape)

#plotting the data
plt.figure(figsize=(10,8))
plt.scatter(data['TV'], data['sales'], c='black')
plt.xlabel('Money Spent on TV ads')
plt.ylabel('Sales')
plt.show()

#storing data into variables and shaping them
X = data['TV'].values.reshape(-1,1)
Y = data['sales'].values.reshape(-1,1)

#creating and fitting the model
reg = LinearRegression()
reg.fit(X, Y)

#making predictions
predictions = reg.predict(X)

#plotting the predicted data
plt.figure(figsize=(16,8))
plt.scatter(data['TV'], data['sales'], c='black')
plt.plot(data['TV'], predictions, c='blue', linewidth=2)
plt.xlabel('Money Spent on TV ads')
plt.ylabel('Sales')
plt.show()
r2 = r2_score(Y, predictions)
print("R2 score is: ", r2)
# Note: for regression, reg.score also returns R^2, not a classification accuracy
print("R2 from reg.score: {:.2f}".format(reg.score(X, Y)))
To work out whether your model is underfitting (or overfitting) you need to look at its bias (the distance between the output predicted by your model and the expected output). To the best of my knowledge you can't do it just by looking at your code; you need to evaluate the model as well (run it).
As it's a linear regression, it's likely that you're underfitting.
I'd suggest splitting your data into a training set and a testing set. You can fit your model on the training set, then see how well it performs on unseen data using the testing set. A model is underfitting if it performs miserably on both the training and the testing data. It's overfitting if it performs brilliantly on the training data but less well on the testing data.
Try something along the lines of:
from sklearn.model_selection import train_test_split

# Split the data into a train set and a test set, leaving 20% (the test_size parameter) for testing
X, X_test, Y, Y_test = train_test_split(data['TV'].values.reshape(-1,1),
                                        data['sales'].values.reshape(-1,1),
                                        test_size=0.2)
# Then fit your model ...
# e.g. reg.fit(X,Y)
# Finally, evaluate how well it does on the training and test data.
print("Test score " + str(reg.score(X_test, Y_test)))
print("Train score " + str(reg.score(X, Y)))
Instead of training and testing on the same data, split your dataset into two or three sets (train, validation, test). You may only need to split it in two (train, test); use the sklearn function train_test_split.
Train your model on the training data, then test it on the testing data and see if you get a good result.
If the model's training accuracy is very high but its testing accuracy is very low, you may say it has overfit. If the model doesn't even reach high accuracy on the training set, it is underfitting.
Hope it helps. :)
I have implemented a custom metric based on SIM, and when I try the code it works. I have implemented it both with tensors and with numpy arrays, and both give the same results. However, when I fit the model, the values reported during training are a lot higher than the values I get when I load the weights produced by the training and apply the same function.
My function is:
from keras import backend as K

def SIM(y_true, y_pred):
    n_y_true = y_true / (K.sum(y_true) + K.epsilon())
    n_y_pred = y_pred / (K.sum(y_pred) + K.epsilon())
    return K.mean(K.sum(K.minimum(n_y_true, n_y_pred)))
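For reference, here is a minimal way to evaluate the metric standalone on numpy arrays (a sketch; the random arrays are just placeholders), which is how the tensor and numpy versions can be compared:
import numpy as np

# Placeholder data just to exercise the metric outside of training
y_true = np.random.rand(4, 32).astype('float32')
y_pred = np.random.rand(4, 32).astype('float32')

# Evaluate the symbolic metric on concrete arrays
sim_value = K.eval(SIM(K.constant(y_true), K.constant(y_pred)))
print(sim_value)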
When I compile the Keras model I add this to the metrics, and during training it reports, for example, SIM: 0.7092.
When I load the weights and try it, the SIM score is around 0.3. The correct weights are loaded (when I restart training with these weights, the same values show up). Does anybody know if I am doing anything wrong?
Why are the metrics reported during training so much higher than those from running the function over a batch?
I am new to AI and I am using Keras and TensorFlow to train CNNs. My dataset is heavily unbalanced and I want to use class weights to counter that.
After a small search on the internet I found out that I can use scikit-learn's compute_class_weight() and compute_sample_weight() to get the class weights and sample weights respectively, and that they can be passed to model.fit() in Keras. But I am unsure how to implement this programmatically for one-hot encoded outputs.
Can someone provide sample code explaining how to implement class weights for one-hot encoded outputs with Keras?
Thanks in advance 😁
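A minimal sketch (not from the original thread) of one way to do this, assuming y_train is a one-hot encoded array: convert the labels back to class indices, compute balanced weights, and pass them to fit as a dict.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# y_train is assumed to be one-hot encoded, shape (n_samples, n_classes)
y_int = np.argmax(y_train, axis=1)  # back to integer class labels
classes = np.unique(y_int)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_int)
class_weight = dict(zip(classes, weights))  # e.g. {0: 0.6, 1: 2.9} (illustrative values)

# Keras accepts a {class_index: weight} dict via the class_weight argument:
# model.fit(x_train, y_train, epochs=10, class_weight=class_weight)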