I am working on a project where I use a custom callback together with the EarlyStopping callback. With this setup, training does not stop even though val_loss is no longer improving much.
Here is my implementation:
class CustomCallback(keras.callbacks.Callback):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.x)
        error_rate = np.sum(self.y == y_pred)
        print(f'Error number:: {error_rate}')
        logs['error_rate'] = error_rate
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
custom_callback = CustomCallback(X_data, y_data)
model.fit(train_data, y_train, epochs=100, batch_size=32, validation_data=(cv_data, y_cv), callbacks=[early_stop, custom_callback])
What is wrong in my implementation?
Why not use a custom metric instead of a callback?
def error_rate(y_true, y_pred):
    rate = K.cast(K.equal(y_true, y_pred), K.floatx())
    return K.sum(rate)
Are you passing label numbers or one-hot tensors as y? Usually you should round the predictions first (otherwise nothing will ever be exactly equal):
def error_rate(y_true, y_pred):
    y_pred = K.cast(K.greater(y_pred, 0.5), K.floatx())
    rate = K.cast(K.equal(y_true, y_pred), K.floatx())
    return K.sum(rate)
Use it as a metric:
model.compile(......, metrics=[error_rate, ...])
Try passing the min_delta argument to EarlyStopping: an absolute change smaller than min_delta counts as no improvement, so training will stop once val_loss plateaus.
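A minimal sketch of that suggestion (the min_delta value below is only an illustrative choice, not a recommendation from the original answer):
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1,
                           patience=2, min_delta=0.001)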
I'm a beginner just getting into PyTorch and neural networks, so I created a small dataset. It consists of two input variables and one output variable (basically the output is a linear function of the inputs with some noise). Now I want to set up a neural network and train it on this dataset. I followed a tutorial and wrote this code:
df = pd.read_csv(r" ... .csv")
X = df[["x", "y"]]
y = df[["goal"]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)
X_train, y_train = np.array(X_train), np.array(y_train)
X_test, y_test = np.array(X_test), np.array(y_test)
# Convert data to torch tensors
class Data(Dataset):
    def __init__(self, X, y):
        self.X = torch.from_numpy(X.astype(np.float32))
        self.y = torch.from_numpy(y.astype(np.float32))
        self.len = self.X.shape[0]

    def __getitem__(self, index):
        return self.X[index], self.y[index]

    def __len__(self):
        return self.len
batch_size = 32
# Instantiate training and test data
train_data = Data(X_train, y_train)
train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
test_data = Data(X_test, y_test)
test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=True)
input_dim = 2
hidden_dim_1 = 2
output_dim = 1
class NeuralNetwork(nn.Module):
    def __init__(self, input_dim, hidden_dim_1, output_dim):
        super(NeuralNetwork, self).__init__()
        self.layer_1 = nn.Linear(input_dim, hidden_dim_1)
        self.layer_out = nn.Linear(hidden_dim_1, output_dim)

    def forward(self, x):
        x = F.relu(self.layer_1(x))
        x = self.layer_out(x)
        return x
model = NeuralNetwork(input_dim, hidden_dim_1, output_dim)
optimizer = optim.SGD(model.parameters(), lr=0.01)
def train(epoch):
    model.train()
    for batch_id, (data, target) in enumerate(train_data):
        data = Variable(data)
        target = Variable(target)
        target = target.to(dtype=torch.float32)
        optimizer.zero_grad()
        out = model(data)
        criterion = F.mse_loss
        loss = criterion(out, target)
        print(loss.detach().numpy())
        loss.backward()
        optimizer.step()

for epoch in range(1, 30):
    train(epoch)
My problem is that the printed loss is extremely high (on the order of 1e8) and does not decrease.
I tried changing some settings of the network, the batch size, and the learning rate, and I tried other optimizers and loss functions, but none of the changes really helped. My research also didn't bring any success. It seems to me that there is a more basic mistake in my code. What did I do wrong?
Thanks in advance!
Your code seems fine to me (although I might have missed a bug). It is in general never safe to say which networks will be successful and which won't, but here are some suggestions if you can't see any progress:
Check the input data. Maybe try plotting it to make sure that it actually contains what you think it does. You may print out the inputs, predicted and expected values (or better, view them in a debugger) to see what's wrong.
Normalize the input data. If there are high values in the input/output data, losses may explode. Ensure that most of the values are roughly between -1 and 1 (see the sketch after this list).
Lower the learning rate. 0.01 is generally a good starting point, but who knows.
Try training for more epochs. Depending on the noise in your data, this could be necessary.
Try adding more neurons. A linear function should in theory be fine with not that many, but maybe the noise is too 'complex'.
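A minimal sketch of the normalization suggestion above, using scikit-learn's StandardScaler on the arrays from the question (the scaler names are illustrative; fitting on the training split only is the usual convention):
from sklearn.preprocessing import StandardScaler

x_scaler = StandardScaler()
y_scaler = StandardScaler()

X_train = x_scaler.fit_transform(X_train)   # fit on the training split only
X_test = x_scaler.transform(X_test)
y_train = y_scaler.fit_transform(y_train)   # y is a 2D column, as in the code above
y_test = y_scaler.transform(y_test)

train_data = Data(X_train, y_train)
test_data = Data(X_test, y_test)
Predictions then live in the scaled space; y_scaler.inverse_transform maps them back to the original units.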
I have defined the following custom model and training loop in Keras:
class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
And I am using the following code to train the model on a simple toy data set:
inputs = keras.layers.Input(shape=(1,))
hidden = keras.layers.Dense(1, activation='tanh')(inputs)
outputs = keras.layers.Dense(1)(hidden)
x = np.arange(0, 2*np.pi, 2*np.pi/100)
y = np.sin(x)
nnmodel = CustomModel(inputs, outputs)
nnmodel.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss="mse", metrics=["mae"])
nnmodel.fit(x, y, batch_size=100, epochs=2000)
I want to be able to see the values of the gradients and the trainable_vars variables in the train_step function for each training step, and I am not sure how to do this.
I tried setting a breakpoint inside train_step in my Python IDE, expecting it to stop there for each epoch after I call model.fit(), but that didn't happen. I also tried printing the values to the log after each epoch, but I am not sure how to achieve this.
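One possible approach, shown here only as a sketch: because train_step is normally compiled into a tf.function, an IDE breakpoint or a plain Python print inside it will not fire on every step, but tf.print will. Alternatively, passing run_eagerly=True to compile() keeps train_step in eager mode so breakpoints do work.
class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        for var, grad in zip(trainable_vars, gradients):
            tf.print(var.name, grad)  # printed on every training step
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Or keep train_step unchanged and run it eagerly so IDE breakpoints trigger:
# nnmodel.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss="mse",
#                 metrics=["mae"], run_eagerly=True)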
I have a model in keras in which I use my custom metric as:
class MyMetrics(keras.callbacks.Callback):
    def __init__(self):
        initial_value = 0

    def on_train_begin(self, logs={}):
        ...

    def on_epoch_end(self, batch, logs={}):
        # here I calculate my important values
        ...
Now, is there a way to visualize them in TensorBoard?
For example if my metric was something like:
def mymetric(y_true, y_pred):
    return myImportantValues
I could visualize them in TensorBoard through
mymodel.compile(..., metrics=[mymetric])
Is there something similar with a metric callback?
I tried to create a function inside the MyMetrics class and pass it to mymodel.compile, but it does not update the values.
You can create an event file with the custom metrics and visualize it in tensorboard directly.
This works for Tensorflow 2.0. In this example, the accuracy/metrics are logged from training history. In your case, you can do it from the on_epoch_end callback.
import datetime
import tensorflow as tf

current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/train/' + current_time
train_summary_writer = tf.summary.create_file_writer(train_log_dir)

history = model.fit(x=X, y=y, epochs=100, verbose=1)

for epoch in range(len(history.history['accuracy'])):
    with train_summary_writer.as_default():
        tf.summary.scalar('loss', history.history['loss'][epoch], step=epoch)
        tf.summary.scalar('accuracy', history.history['accuracy'][epoch], step=epoch)
After script execution,
tensorboard --logdir logs/train
https://www.tensorflow.org/tensorboard/r2/get_started#using_tensorboard_with_other_methods
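A sketch of the on_epoch_end variant mentioned above (the callback class, the error_count metric, and the X_val/y_val arrays are illustrative names, not from the original answer): compute the value in the callback and write it to the event file every epoch, so TensorBoard updates while training is still running.
import numpy as np
import tensorflow as tf
from tensorflow import keras

class MyMetricsToTensorBoard(keras.callbacks.Callback):
    def __init__(self, x_val, y_val, log_dir):
        super().__init__()
        self.x_val, self.y_val = x_val, y_val
        self.writer = tf.summary.create_file_writer(log_dir)

    def on_epoch_end(self, epoch, logs=None):
        # "important value" computed here: e.g. the number of misclassified samples
        y_pred = np.round(self.model.predict(self.x_val))
        error_count = np.sum(y_pred.ravel() != self.y_val.ravel())
        with self.writer.as_default():
            tf.summary.scalar('error_count', error_count, step=epoch)
        self.writer.flush()

model.fit(x=X, y=y, epochs=100, verbose=1,
          callbacks=[MyMetricsToTensorBoard(X_val, y_val, 'logs/train/' + current_time)])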
You need to create a custom callback first:
class CustomLogs(Callback):
    def __init__(self, validation_data=()):
        super(Callback, self).__init__()
        self.X_val, self.y_val = validation_data

    def on_train_begin(self, logs={}):
        # at the beginning of training, create a list to hold the f1_scores
        self.model.f1_scores = []

    def on_epoch_end(self, epoch, logs={}):
        # calculating the micro-averaged f1_score
        val_predict_proba = np.array(self.model.predict(self.X_val))
        val_predict = np.round(val_predict_proba)
        val_targ = self.y_val
        # using scikit-learn's f1_score
        f1 = f1_score(val_targ, val_predict, average='micro')
        # appending the f1_score for every epoch
        self.model.f1_scores.append(f1)
        print('micro_f1_score: ', f1)
# initialize your callback with validation data
customLogs = CustomLogs(validation_data=(X_test, Y_test))

# no change in the compile method
model.compile(optimizer='Adam', loss='CategoricalCrossentropy')

# pass customLogs and validation_data in the fit method
model.fit(X_train,
          Y_train,
          batch_size=32,
          validation_data=(X_test, Y_test),
          callbacks=[customLogs],
          epochs=20)

# after fit, access the f1_scores
f1_scores = model.f1_scores
# writing the summary to tensorboard
log_dir = '/log'
writer = tf.summary.create_file_writer(log_dir)
for idx in range(len(f1_scores)):
    with writer.as_default(step=idx + 1):
        tf.summary.scalar('f1_scores', f1_scores[idx])
writer.flush()
Now launch: tensorboard --logdir /log.
You can see the plot of f1_scores under the TensorBoard Scalars tab.
I started using Ignite recently and I found it very interesting.
I would like to train a model using as an optimizer the LBFGS algorithm from the torch.optim module.
This is my code:
from ignite.engine import Events, Engine, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import RootMeanSquaredError, Loss
from ignite.handlers import EarlyStopping
D_in, H, D_out = 5, 10, 1
model = simpleNN(D_in, H, D_out) # a simple MLP with 1 Hidden Layer
model.double()
train_loader, val_loader = get_data_loaders(i)
optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
loss_func = torch.nn.MSELoss()
#Ignite
trainer = create_supervised_trainer(model, optimizer, loss_func)
evaluator = create_supervised_evaluator(model, metrics={'RMSE': RootMeanSquaredError(),'LOSS': Loss(loss_func)})
@trainer.on(Events.ITERATION_COMPLETED)
def log_training_loss(engine):
    print("Epoch[{}] Iterations[{}] Loss: {:.5f}".format(
        engine.state.epoch, len(train_loader), engine.state.output))
def score_function(engine):
    val_loss = engine.state.metrics['RMSE']
    print("VAL_LOSS: {:.5f}".format(val_loss))
    return -val_loss
handler = EarlyStopping(patience=10, score_function=score_function, trainer=trainer)
evaluator.add_event_handler(Events.COMPLETED, handler)
trainer.run(train_loader, max_epochs=100)
And the error that raises is:
TypeError: step() missing 1 required positional argument: 'closure'
I know that it is required to define a closure for the implementation of LBFGS, so my question is: how can I do it using Ignite? Or is there another approach for doing this?
The way to do it is like this:
from ignite.engine import Engine

model = ...
optimizer = torch.optim.LBFGS(model.parameters(), lr=1)
criterion = ...

def update_fn(engine, batch):
    model.train()
    x, y = batch
    # pass to device if needed as here: https://github.com/pytorch/ignite/blob/40d815930d7801b21acfecfa21cd2641a5a50249/ignite/engine/__init__.py#L45

    def closure():
        y_pred = model(x)
        loss = criterion(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        return loss

    optimizer.step(closure)

trainer = Engine(update_fn)
Source
You need to wrap the whole evaluation step (forward pass, zero_grad, backward) in a closure that returns the loss, and pass it to optimizer.step:
for batch in loader():
    def closure():
        ...
        return loss
    optim.step(closure)
Pytorch docs for 'closure'
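A small self-contained sketch of that closure pattern in plain PyTorch (the tiny linear model and random data are only for illustration):
import torch

model = torch.nn.Linear(5, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), lr=1)

x = torch.randn(32, 5)
y = torch.randn(32, 1)

for _ in range(10):
    def closure():
        optimizer.zero_grad()          # reset gradients inside the closure
        loss = criterion(model(x), y)  # forward pass
        loss.backward()                # backward pass
        return loss                    # LBFGS re-evaluates the loss via this closure
    optimizer.step(closure)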
I've tried to use the code provided by Keras before these metrics were removed. Here's the code:
from keras import backend as K

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def fbeta_score(y_true, y_pred, beta=1):
    if beta < 0:
        raise ValueError('The lowest choosable beta is zero (only precision).')
    # If there are no true positives, fix the F score at 0 like sklearn.
    if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
        return 0
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    bb = beta ** 2
    fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
    return fbeta_score

def fmeasure(y_true, y_pred):
    return fbeta_score(y_true, y_pred, beta=1)
From what I saw, it seems like they use the correct formulas. But when I tried to use them as metrics in the training process, I got exactly the same output for val_accuracy, val_precision, val_recall, and val_fmeasure. I believe that could happen even if the formulas are correct, but it seems unlikely. Any explanation for this issue?
Since Keras 2.0, the metrics f1, precision, and recall have been removed. The solution is to use a custom metric function:
from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision

    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))
model.compile(loss='binary_crossentropy',
              optimizer="adam",
              metrics=[f1])
The return line of this function
return 2*((precision*recall)/(precision+recall+K.epsilon()))
was modified by adding the constant epsilon in order to avoid division by zero, so NaN will not be computed.
Using a Keras metric function is not the right way to calculate F1 or AUC or something like that.
The reason for this is that the metric function is called at each batch step at validation. That way the Keras system calculates an average on the batch results. And that is not the right F1 score.
That's the reason why the F1 score was removed from the metric functions in Keras. See here:
https://github.com/keras-team/keras/commit/a56b1a55182acf061b1eb2e2c86b48193a0e88f7
https://github.com/keras-team/keras/issues/5794
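A small made-up example (using scikit-learn, not from the original answer) of why averaging per-batch F1 scores is not the same as the F1 score over the whole epoch:
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])

# Treat the first four and last four samples as two "batches".
batch_f1 = [f1_score(y_true[:4], y_pred[:4]), f1_score(y_true[4:], y_pred[4:])]

print("mean of per-batch F1:", np.mean(batch_f1))         # ~0.533
print("F1 over all samples: ", f1_score(y_true, y_pred))  # 0.5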
The right way to do this is to use a custom callback function in a way like this:
https://github.com/PhilipMay/mltb#module-keras
https://medium.com/#thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2
This is a streaming custom f1_score metric that I made using subclassing. It works for TensorFlow 2.0 beta, but I haven't tried it on other versions. What it does is keep track of true positives, predicted positives, and all possible positives throughout the whole epoch, and then calculate the f1 score at the end of the epoch. I think the other answers only give the f1 score for each batch, which isn't really the best metric when we actually want the f1 score over all the data.
I got a raw, unedited copy of Aurélien Géron's new book Hands-On Machine Learning with Scikit-Learn & TensorFlow 2.0 and highly recommend it. This is how I learned to write this custom f1 metric using subclasses. It's hands down the most comprehensive TensorFlow book I've ever seen. TensorFlow is seriously a pain in the butt to learn, and this guy lays down the coding groundwork to learn a lot.
FYI: in the metrics, I had to put the parentheses in f1_score() or else it wouldn't work.
pip install tensorflow==2.0.0-beta1
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
import numpy as np
def create_f1():
    def f1_function(y_true, y_pred):
        y_pred_binary = tf.where(y_pred >= 0.5, 1., 0.)
        tp = tf.reduce_sum(y_true * y_pred_binary)
        predicted_positives = tf.reduce_sum(y_pred_binary)
        possible_positives = tf.reduce_sum(y_true)
        return tp, predicted_positives, possible_positives
    return f1_function

class F1_score(keras.metrics.Metric):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # handles base args (e.g., dtype)
        self.f1_function = create_f1()
        self.tp_count = self.add_weight("tp_count", initializer="zeros")
        self.all_predicted_positives = self.add_weight('all_predicted_positives', initializer='zeros')
        self.all_possible_positives = self.add_weight('all_possible_positives', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        tp, predicted_positives, possible_positives = self.f1_function(y_true, y_pred)
        self.tp_count.assign_add(tp)
        self.all_predicted_positives.assign_add(predicted_positives)
        self.all_possible_positives.assign_add(possible_positives)

    def result(self):
        precision = self.tp_count / self.all_predicted_positives
        recall = self.tp_count / self.all_possible_positives
        f1 = 2 * (precision * recall) / (precision + recall)
        return f1
X = np.random.random(size=(1000, 10))
Y = np.random.randint(0, 2, size=(1000,))
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
model = keras.models.Sequential([
    keras.layers.Dense(5, input_shape=[X.shape[1], ]),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=[F1_score()])
history = model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
As @Diesche mentioned, the main problem with implementing f1_score this way is that it is called at every batch step and leads to confusing results more than anything else.
I struggled with this issue for some time, but eventually worked my way around the problem by using a callback: at the end of an epoch, the callback predicts on the data (in this case I chose to only apply it to my validation data) with the new model parameters and gives you coherent metrics evaluated on the whole epoch.
I'm using tensorflow-gpu (1.14.0) on Python 3.
from tensorflow.python.keras.models import Sequential, Model
from sklearn.metrics import f1_score, precision_score, recall_score
from tensorflow.keras.callbacks import Callback
from tensorflow.python.keras import optimizers

optimizer = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy'])
model.summary()

class Metrics(Callback):
    def __init__(self, model, valid_data, true_outputs):
        super(Callback, self).__init__()
        self.model = model
        self.valid_data = valid_data      # the validation data I'm getting metrics on
        self.true_outputs = true_outputs  # the ground truth of my validation data
        self.steps = len(self.valid_data)

    def on_epoch_end(self, *args, **kwargs):
        gen = generator(self.valid_data)  # generator yielding the validation data
        val_predict = np.asarray(self.model.predict(gen, batch_size=1, verbose=0, steps=self.steps))
        # The function from_proba_to_output is used to transform probabilities
        # into a format understandable by sklearn's f1_score function.
        val_predict = from_proba_to_output(val_predict, 0.5)
        _val_f1 = f1_score(self.true_outputs, val_predict)
        _val_precision = precision_score(self.true_outputs, val_predict)
        _val_recall = recall_score(self.true_outputs, val_predict)
        print("val_f1: ", _val_f1, " val_precision: ", _val_precision, " _val_recall: ", _val_recall)
The function from_proba_to_output goes as follows:
def from_proba_to_output(probabilities, threshold):
    outputs = np.copy(probabilities)
    for i in range(len(outputs)):
        if float(outputs[i]) > threshold:
            outputs[i] = int(1)
        else:
            outputs[i] = int(0)
    return np.array(outputs)
I then train my model by referencing this metrics class in the callbacks part of fit_generator. I did not detail the implementation of my train_generator and valid_generator as these data generators are specific to the classification problem at hand and posting them would only bring confusion.
model.fit_generator(
    train_generator, epochs=nbr_epochs, verbose=1, validation_data=valid_generator,
    callbacks=[Metrics(model, valid_data, true_outputs)])
As @Pedia said in his comment above, on_epoch_end, as stated in github.com/fchollet/keras/issues/5400, is the best approach.
I also suggest this work-around
install keras_metrics package by ybubnov
call model.fit(nb_epoch=1, ...) inside a for loop taking advantage of the precision/recall metrics outputted after every epoch
Something like this:
for mini_batch in range(epochs):
    model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                           verbose=2, validation_data=(X_val, Y_val))
    precision = model_hist.history['val_precision'][0]
    recall = model_hist.history['val_recall'][0]
    f_score = (2.0 * precision * recall) / (precision + recall)
    print('F1-SCORE {}'.format(f_score))