I am using Keras to build a neural network model:
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers, regularizers

model_keras = Sequential()
model_keras.add(Dense(4, input_dim=input_num, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model_keras.add(Dense(1, activation='linear', kernel_regularizer=regularizers.l2(0.01)))
sgd = optimizers.SGD(lr=0.01, clipnorm=0.5)
model_keras.compile(loss='mean_squared_error', optimizer=sgd)
model_keras.fit(X_norm_train, y_norm_train, batch_size=20, epochs=200)
The output looks like below. I am wondering if it is possible to print the loss only every 10 epochs instead of every epoch. Thanks!
Epoch 1/200
20/20 [==============================] - 0s - loss: 0.2661
Epoch 2/200
20/20 [==============================] - 0s - loss: 0.2625
Epoch 3/200
20/20 [==============================] - 0s - loss: 0.2590
Epoch 4/200
20/20 [==============================] - 0s - loss: 0.2556
Epoch 5/200
20/20 [==============================] - 0s - loss: 0.2523
Epoch 6/200
20/20 [==============================] - 0s - loss: 0.2490
Epoch 7/200
20/20 [==============================] - 0s - loss: 0.2458
Epoch 8/200
20/20 [==============================] - 0s - loss: 0.2427
Epoch 9/200
20/20 [==============================] - 0s - loss: 0.2397
Epoch 10/200
20/20 [==============================] - 0s - loss: 0.2367
Epoch 11/200
20/20 [==============================] - 0s - loss: 0.2338
Epoch 12/200
20/20 [==============================] - 0s - loss: 0.2309
Epoch 13/200
20/20 [==============================] - 0s - loss: 0.2281
Epoch 14/200
20/20 [==============================] - 0s - loss: 0.2254
Epoch 15/200
20/20 [==============================] - 0s - loss: 0.2228
:
It is not possible to reduce the frequency of logging to stdout; however, passing verbose=0 to the fit() method turns logging off completely.
Since the loop over epochs is not exposed in Keras' Sequential model, one way to collect scalar summaries at a custom frequency is to use Keras callbacks. In particular, you could use the TensorBoard callback (assuming you are running with the TensorFlow backend) or the CSVLogger callback (any backend) to collect any scalar summaries (the training loss, in your case):
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers, regularizers
from keras.callbacks import TensorBoard

model_keras = Sequential()
model_keras.add(Dense(4, input_dim=input_num, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model_keras.add(Dense(1, activation='linear', kernel_regularizer=regularizers.l2(0.01)))
sgd = optimizers.SGD(lr=0.01, clipnorm=0.5)
model_keras.compile(loss='mean_squared_error', optimizer=sgd)

TB = TensorBoard(histogram_freq=10, batch_size=20)
model_keras.fit(X_norm_train, y_norm_train, batch_size=20, epochs=100, callbacks=[TB])
Setting histogram_freq=10 will save the loss every 10 epochs.
EDIT: passing validation_data=(...) to the fit method will also allow you to track validation-set metrics.
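For the CSVLogger route, here is a minimal sketch reusing the model and data from above (the filename training_log.csv is just a placeholder):

from keras.callbacks import CSVLogger

# Writes one row of metrics (loss, etc.) per epoch to a CSV file,
# which you can then inspect or subsample at any frequency you like.
# 'training_log.csv' is an arbitrary placeholder filename.
csv_logger = CSVLogger('training_log.csv')
model_keras.fit(X_norm_train, y_norm_train, batch_size=20, epochs=100,
                callbacks=[csv_logger], verbose=0)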
Create a Keras callback to reduce the number of log lines. By default, Keras prints one log line per epoch. The following code prints only 10 log lines regardless of the number of epochs.
import tensorflow as tf

class LogCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        loss = logs["loss"]
        if epoch % Lafte == Lafte - 1:  # log only after every Lafte epochs
            print(f"Average batch loss: {loss:.9f}")
        if epoch == Epochs - 1:
            print(f"Final average batch loss: {loss:.9f}")

Model = model()      # build the model (definition elided)
Model.compile(...)

Dsize = ...          # number of samples in the training data
Bsize = ...          # number of samples to process in one batch
Steps = 1000         # number of batches to use for training
Epochs = round(Steps / (Dsize / Bsize))
Lafte = round(Epochs / 10)  # log 10 times only, regardless of the number of epochs
if Lafte == 0:
    Lafte = 1        # avoid modulo by zero in on_epoch_end

Model.fit(Data, epochs=Epochs, steps_per_epoch=round(Dsize / Bsize),
          callbacks=[LogCallback()], verbose=0)
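Note that the callback reads the module-level Lafte and Epochs variables, so they must be defined (as above) before fit() is called.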
I am training a model with the following code
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(n_cols,)))
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
early_stopping_monitor = EarlyStopping(patience=3)
model.fit(X_train_np, target, validation_split=0.3, epochs=100, callbacks=[early_stopping_monitor])
This is designed to stop the training if val_loss does not improve for 3 epochs. The result is shown below. My question is: will the model stop with the weights of epoch 8 or epoch 7? The performance got worse in epoch 8, so training stopped, but the model went ahead by one epoch and ended on worse-performing parameters, since the earlier epoch (epoch 7) was better. Do I need to retrain the model now with 7 epochs?
Train on 623 samples, validate on 268 samples
Epoch 1/100
623/623 [==============================] - 1s 1ms/step - loss: 4.0365 - accuracy: 0.5923 - val_loss: 1.2208 - val_accuracy: 0.6231
Epoch 2/100
623/623 [==============================] - 0s 114us/step - loss: 1.4412 - accuracy: 0.6356 - val_loss: 0.7193 - val_accuracy: 0.7015
Epoch 3/100
623/623 [==============================] - 0s 103us/step - loss: 1.4335 - accuracy: 0.6260 - val_loss: 1.3778 - val_accuracy: 0.7201
Epoch 4/100
623/623 [==============================] - 0s 106us/step - loss: 3.5732 - accuracy: 0.6324 - val_loss: 2.7310 - val_accuracy: 0.6194
Epoch 5/100
623/623 [==============================] - 0s 111us/step - loss: 1.3116 - accuracy: 0.6372 - val_loss: 0.5952 - val_accuracy: 0.7351
Epoch 6/100
623/623 [==============================] - 0s 98us/step - loss: 0.9357 - accuracy: 0.6645 - val_loss: 0.8047 - val_accuracy: 0.6828
Epoch 7/100
623/623 [==============================] - 0s 105us/step - loss: 0.7671 - accuracy: 0.6934 - val_loss: 0.9918 - val_accuracy: 0.6679
Epoch 8/100
623/623 [==============================] - 0s 126us/step - loss: 2.2968 - accuracy: 0.6629 - val_loss: 1.7789 - val_accuracy: 0.7425
Use restore_best_weights=True with monitor set to the target quantity. That way, the best weights will be restored automatically after training.
early_stopping_monitor = EarlyStopping(patience=3,
                                       monitor='val_loss',  # assuming it's val_loss
                                       restore_best_weights=True)
From docs:
restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity ('val_loss' here). If False, the model weights obtained at the last step of training are used (default False).
Documentation link
All the code I have included here uses TensorFlow 2.0.
filepath: a string that can contain formatting options such as the epoch number. For example, the following is a common filepath: weights.{epoch:02d}-{val_loss:.2f}.hdf5
monitor: the quantity to watch (typically 'val_loss' or 'val_accuracy')
mode: whether the monitored value should be minimized or maximized (typically either 'min' or 'max')
save_best_only: if this is set to True, it will only save the model for the current epoch if its metric value is better than what has gone before. However, if you set save_best_only to False, it will save the model after every epoch (regardless of whether that model was better than previous models or not).
Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(n_cols,)))
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

fname = "weights.{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = tf.keras.callbacks.ModelCheckpoint(fname, monitor="val_loss", mode="min", save_best_only=True, verbose=1)
model.fit(X_train_np, target, validation_split=0.3, epochs=100, callbacks=[checkpoint])
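To reuse the best checkpoint later, a minimal sketch (the filename below is hypothetical; the actual name depends on the epoch and val_loss values reached during your run):

# Hypothetical filename: substitute the checkpoint file that was actually written.
model.load_weights("weights.05-0.43.hdf5")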
I'm trying to train an LSTM model to predict temperature, but the model only gets trained in the first epoch.
I collected about twenty hours of CPU usage and temperature data from a server as the dataset. I want to predict the CPU temperature 10 minutes ahead using the preceding 10 minutes of data, so I reshaped my dataset to (1301, 10, 2), as I have 1301 samples, 10 timesteps, and 2 features. I then split it into 1201 training samples and 100 validation samples.
I checked the dataset manually, so it should be right.
I create the LSTM model as below:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Flatten

model = Sequential()
model.add(LSTM(10, activation="relu", input_shape=(train_x.shape[1], train_x.shape[2]), return_sequences=True))
model.add(Flatten())
model.add(Dense(1, activation="softmax"))
model.compile(loss='mean_absolute_error', optimizer='RMSprop')
and try to fit it
model.fit(train_x, train_y, epochs=50, batch_size=32, validation_data=(test_x, test_y), verbose=2)
I got the log like this:
Epoch 1/50
- 1s - loss: 0.8016 - val_loss: 0.8147
Epoch 2/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 3/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 4/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 5/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 6/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 7/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 8/50
- 0s - loss: 0.8016 - val_loss: 0.8147
Epoch 9/50
- 0s - loss: 0.8016 - val_loss: 0.8147
The training time of each epoch is 0s except for the first epoch, and the loss never decreases. I tried changing the number of LSTM cells, the loss function, and the optimizer, but it still doesn't work.
Changing the activation function of the last layer from softmax to sigmoid makes the model work: softmax over a single output unit always returns 1.0, so the prediction is constant and no useful gradient flows. Thanks to @giser_yugang and @Ashwin Geet D'Sa.
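To see why, a quick plain-NumPy check (illustrative only): softmax normalizes a single value against itself, so with one output unit it always returns 1.0 no matter what the input is.

import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([0.3])))   # [1.]
print(softmax(np.array([-5.0])))  # [1.] -- constant output, so the loss cannot decrease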
I'm just trying to play around with Keras, but I'm running into some trouble trying to teach it a basic function (multiply by two). My setup is as follows. Since I'm new to this, I added in comments what I believe to be happening at each step.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)  # inputs: 1, 2, ..., 1000
y_train = x_train * 2                 # targets: inputs multiplied by two

model = Sequential()
model.add(Dense(32, input_dim=1, activation='sigmoid'))  # add a 32-node layer
model.add(Dense(32, activation='sigmoid'))               # add a second 32-node layer
model.add(Dense(1, activation='sigmoid'))                # add a final output layer
model.compile(loss='mse', optimizer='rmsprop')           # compile it with loss being mean squared error
model.fit(x_train, y_train, epochs=10, batch_size=100)   # train
score = model.evaluate(x_train, y_train, batch_size=100)
print(score)
I get the following output:
1000/1000 [==============================] - 0s 355us/step - loss: 1334274.0375
Epoch 2/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333999.8250
Epoch 3/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333813.4062
Epoch 4/10
1000/1000 [==============================] - 0s 28us/step - loss: 1333679.2625
Epoch 5/10
1000/1000 [==============================] - 0s 27us/step - loss: 1333591.6750
Epoch 6/10
1000/1000 [==============================] - 0s 51us/step - loss: 1333522.0000
Epoch 7/10
1000/1000 [==============================] - 0s 23us/step - loss: 1333473.7000
Epoch 8/10
1000/1000 [==============================] - 0s 24us/step - loss: 1333440.6000
Epoch 9/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333412.0250
Epoch 10/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333390.5000
1000/1000 [==============================] - 0s 66us/step
['loss']
1333383.1143554687
It seems like the loss is extremely high for this basic function, and I'm confused why it's not able to learn it. Am I confused, or have I done something wrong?
Using a sigmoid activation constrains your output to the range [0, 1]. But your target output is in the range [0, 2000], so your network cannot learn. Try a relu activation instead.
Try using adam rather than rmsprop when debugging; it almost always works better.
Train longer.
Putting it all together, I get the following output:
Epoch 860/1000
1000/1000 [==============================] - 0s 29us/step - loss: 5.1868e-08
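For reference, a minimal sketch of the combined changes (the linear output layer is my choice here; a relu output would also lift the [0, 1] constraint, since the targets are positive):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)
y_train = x_train * 2

model = Sequential()
model.add(Dense(32, input_dim=1, activation='relu'))      # relu instead of sigmoid
model.add(Dense(32, activation='relu'))
model.add(Dense(1))                                       # linear output: not constrained to [0, 1]
model.compile(loss='mse', optimizer='adam')               # adam instead of rmsprop
model.fit(x_train, y_train, epochs=1000, batch_size=100)  # train longer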
I've just started using Keras. The sample I'm working on has a model and the following snippet is used to run the model
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_one_hot = label_binarizer.fit_transform(y_train)
model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, nb_epoch=3, validation_split=0.2)
I get the following response:
Using TensorFlow backend.
Train on 80 samples, validate on 20 samples
Epoch 1/3
32/80 [===========>..................] - ETA: 0s - loss: 1.5831 - acc: 0.4062
80/80 [==============================] - 0s - loss: 1.3927 - acc: 0.4500 - val_loss: 0.7802 - val_acc: 0.8500
Epoch 2/3
32/80 [===========>..................] - ETA: 0s - loss: 0.9300 - acc: 0.7500
80/80 [==============================] - 0s - loss: 0.8490 - acc: 0.8000 - val_loss: 0.5772 - val_acc: 0.8500
Epoch 3/3
32/80 [===========>..................] - ETA: 0s - loss: 0.6397 - acc: 0.8750
64/80 [=======================>......] - ETA: 0s - loss: 0.6867 - acc: 0.7969
80/80 [==============================] - 0s - loss: 0.6638 - acc: 0.8000 - val_loss: 0.4294 - val_acc: 0.8500
The documentation says that fit returns
A History instance. Its history attribute contains all information
collected during training.
Does anyone know how to interpret the history instance?
For example, what does 32/80 mean? I assume 80 is the number of samples but what is 32? ETA: 0s ??
ETA = Estimated Time of Arrival.
80 is the size of your training set; 32/80 and 64/80 mean that your batch size is 32 and that the first (respectively second) batch is currently being processed. With 80 samples and a batch size of 32, an epoch takes ceil(80/32) = 3 batches, which is why the progress counter advances 32/80, 64/80, 80/80.
loss and acc refer to the current loss and accuracy of the training set.
At the end of each epoch your trained NN is evaluated against your validation set. This is what val_loss and val_acc refer to.
The history object returned by model.fit() is an instance of a simple class with some fields, e.g. a reference to the model, a params dict and, most importantly, a history dict. It stores the values of loss and acc (or any other metric used) at the end of each epoch. For 2 epochs it will look like this:
{
'val_loss': [16.11809539794922, 14.12947562917035],
'val_acc': [0.0, 0.0],
'loss': [14.890108108520508, 12.088571548461914],
'acc': [0.0, 0.25]
}
This comes in very handy if you want to visualize your training progress.
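For example, a minimal matplotlib sketch (assuming history is the object returned by model.fit()):

import matplotlib.pyplot as plt

# history = model.fit(...)
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()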
Note: if your validation loss/accuracy starts increasing while your training loss/accuracy is still decreasing, this is an indicator of overfitting.
Note 2: at the very end you should test your NN against a test set that is different from your training set and validation set and thus has never been touched during the training process.
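A one-line sketch, with hypothetical held-out arrays X_test / y_test that were never used during training:

# Returns the loss and any compiled metrics (here: accuracy) on unseen data.
test_loss, test_acc = model.evaluate(X_test, y_test)
print(test_loss, test_acc)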
32 is your batch size; it is the default value, and you can change it in your fit function if you wish to do so.
After the first batch is trained, Keras estimates the training duration (ETA: estimated time of arrival) of one epoch, which is equivalent to one round of training with all your samples.
In addition to that you get the losses (the difference between prediction and true labels) and your metric (in your case the accuracy) for both the training and the validation samples.
I am using HDF5Matrix to load a dataset and train my model with it. In the first epoch I obtain about 10% accuracy.
At the moment my dataset is not very large, so I can copy the contents of the HDF5Matrix into a numpy array and train with that instead. I reinitialise the model, and this time, in the first epoch, I obtain 40% accuracy.
For more information about the HDF5Matrix, see this example.
I understand that in the fit method the shuffle parameter must be either False or 'batch'. I get the same behaviour either way.
Does anybody have the same problem? Could you tell me if there is something I am doing wrong?
This is a snippet of the code:
Using HDF5Matrix
from keras.utils.io_utils import HDF5Matrix

x_train = HDF5Matrix('../data/default_data.h5', 'data')
y_train = HDF5Matrix('../data/default_data.h5', 'labels')

# create the model ...

# train the model
model.fit(x_train, y_train, epochs=200, batch_size=2048, shuffle='batch')
# which outputs:
Epoch 1/200
1758510/1758510 [==============================] - 42s - loss: 2.5574 - categorical_accuracy: 0.1032
Epoch 2/200
1758510/1758510 [==============================] - 41s - loss: 2.3145 - categorical_accuracy: 0.1553
Epoch 3/200
1758510/1758510 [==============================] - 41s - loss: 2.1931 - categorical_accuracy: 0.2067
Epoch 4/200
694272/1758510 [==========>...................] - ETA: 24s - loss: 2.1055 - categorical_accuracy: 0.2328
Using numpy array
# create the model again
...
# copy the HDF5Matrix to a numpy array
X_training = x_train[0:1758510]
Y_training = y_train[0:1758510]
# check X_training is equal to x_train
...
# train the model again
model.fit(X_training,
Y_training,
epochs=200,
batch_size=256,
shuffle=True)
# which outputs
Epoch 1/200
1758510/1758510 [==============================] - 27s - loss: 1.5019 - categorical_accuracy: 0.4710
Epoch 2/200
89600/1758510 [>.............................] - ETA: 26s - loss: 1.2786 - categorical_accuracy: 0.5523
Thank you very much