Why is my training output printing irregularly? - python-3.x

I am training my CNN using the following code:
history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=int(np.ceil(train_data_gen.n / float(batch_size))),
    epochs=num_epochs,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(val_data_gen.n / float(batch_size))),
    verbose=1,
)
and this is the output I get.
The accuracy is okay, but why is "Epoch 1/20" repeating itself multiple times?

Your steps_per_epoch is not set up correctly, and you are effectively running validation twice as well.
Try it without the float and int casts:
history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=np.ceil(train_data_gen.n / batch_size),
    epochs=num_epochs,
    validation_data=val_data_gen,
    validation_steps=np.ceil(val_data_gen.n / batch_size),
    verbose=1,
)
Print out the sizes of the generators; steps_per_epoch should reflect the number of training steps (usually determined by the generator).
Also make sure that train_data_gen.n equals the total number of training samples, and likewise that val_data_gen.n equals the total number of validation samples.
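For example, a quick sanity check might look like this (a sketch, reusing train_data_gen, val_data_gen, and batch_size from your code):
import numpy as np

# Sanity check (sketch): print the generator sizes and the steps computed from them.
print("training samples:", train_data_gen.n)
print("validation samples:", val_data_gen.n)
print("steps_per_epoch:", int(np.ceil(train_data_gen.n / float(batch_size))))
print("validation_steps:", int(np.ceil(val_data_gen.n / float(batch_size))))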
Or just comment out steps_per_epoch and validation_steps, and the model will use the generator length as the number of training steps.
Consult: https://keras.io/models/model/#fit_generator
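For instance, the call with the steps arguments left out could look like this (a sketch using the same names as in the question; Keras then falls back to the generator length for the number of steps):
# Minimal sketch: let Keras infer steps_per_epoch and validation_steps
# from len(generator) instead of passing them explicitly.
history = model.fit_generator(
    train_data_gen,
    epochs=num_epochs,
    validation_data=val_data_gen,
    verbose=1,
)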

Related

Keras EarlyStopping settings

I have built a standard U-Net with Keras to train on my own dataset. I have set the EarlyStopping option as follows. However, during training it keeps reporting that the precision value did not change, while on the next line it is apparently changing. Has anyone met this problem before, or does anyone know how to solve it?
train_iterator = create_one_shot_iterator(train_files, batch_size=train_batch_size, num_epoch=epochs)
train_images, train_masks = train_iterator.get_next()
train_images, train_masks = augment_dataset(train_images, train_masks,
                                            augment=True,
                                            resize=True,
                                            scale=1 / 255.,
                                            hue_delta=0.1,
                                            horizontal_flip=True,
                                            width_shift_range=0.1,
                                            height_shift_range=0.1,
                                            rotate=15)
val_iterator = create_initializable_iterator(val_files, batch_size=val_batch_size)
val_images, val_masks = val_iterator.get_next()
val_images, val_masks = augment_dataset(val_images, val_masks,
                                        augment=True,
                                        resize=True,
                                        scale=1 / 255.,
                                        )
model_input = tf.keras.layers.Input(tensor=train_images)
model_output = Unet.u_net_256(model_input)
# Model definition
model = models.Model(inputs=model_input, outputs=model_output)
precision = tf.keras.metrics.Precision()
model.compile(optimizer='adam',
              loss=bce_dice_loss,
              metrics=[precision],
              target_tensors=[train_masks])
model.summary()
cp = [tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(hdf5_dir, class_name) + '.hdf5',
                                         monitor='val_precision',
                                         save_best_only=True,
                                         verbose=1),
      tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                     write_graph=True,
                                     write_images=True),
      tf.keras.callbacks.EarlyStopping(monitor='val_precision', patience=10, verbose=2, mode='max')]
History = model.fit(train_images, train_masks,
                    steps_per_epoch=int(np.ceil(num_train_samples / float(train_batch_size))),
                    epochs=epochs,
                    validation_data=(val_images, val_masks),
                    validation_steps=int(np.ceil(num_val_samples / float(val_batch_size))),
                    callbacks=cp,
                    )
The feedback message you are getting is letting you know that for the epoch just completed, no improvement in validation precision occurred. This is probably happening because you have set verbose=2 in the callback settings, which is intended to give you a heads up that if you see the message for 10 consecutive epochs, your training will end.
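For reference, a minimal sketch of the relevant callback settings (the file path is a placeholder, and the monitored metric is assumed to be named 'val_precision' as in your setup):
import tensorflow as tf

callbacks = [
    # Prints a per-epoch message saying whether val_precision improved.
    tf.keras.callbacks.ModelCheckpoint('best_model.hdf5',
                                       monitor='val_precision',
                                       save_best_only=True,
                                       verbose=1),
    # Only stops training after `patience` consecutive epochs without improvement.
    tf.keras.callbacks.EarlyStopping(monitor='val_precision',
                                     patience=10,
                                     mode='max',
                                     verbose=1),
]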

No effect of batch_size on number of iterations in model.fit in keras

I have a simple model for demonstration:
input_layer = Input(shape=(100,))
encoded = Dense(2, activation='relu')(input_layer)
X = np.ones((1000, 100))
Y = np.ones((1000, 2))
print(X.shape)
model = Model(input_layer, encoded)
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x=X, y=Y, batch_size = 2)
Output is:
2.2.4
(1000, 100)
Epoch 1/1
1000/1000 [==============================] - 3s 3ms/step - loss: 1.3864
Why are there 1000 iterations in one epoch (as shown in the output)?
I tried changing the batch size, but it does not change the output. I would have expected 1000 / 2 = 500. Please explain what is wrong with my understanding and how I can set the batch size appropriately.
Thanks
In model.fit, the numbers on the left of the progress bar count samples, so it always shows current samples / total number of samples.
Maybe you are confused because it works differently in model.fit_generator, where you actually see iterations or batches being counted.
Changing the batch size does have an effect: the bar progresses faster, although you do not explicitly see it as a step. I had the same question in mind some time ago.
If you want to explicitly see each step, you can use steps_per_epoch and validation_steps.
An example is listed below.
model.fit_generator(training_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=validation_generator,
                    validation_steps=validation_steps)
In this case, steps_per_epoch = number_of_training_samples / batch_size, while validation_steps = number_of_validation_samples / batch_size.
During the training, you will see 500 steps instead of 1000 (provided that you have 1000 training samples and your batch_size is 2).
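As a quick check of the arithmetic (a sketch using the numbers from the question):
import numpy as np

num_samples = 1000
batch_size = 2
steps_per_epoch = int(np.ceil(num_samples / batch_size))
print(steps_per_epoch)  # 500 batches per epoch, each containing 2 samples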

Emotion detection on text

I am a newbie in ML and was experimenting with emotion detection on the text.
So I have an ISEAR dataset which contains tweets with their emotion labeled.
So my current accuracy is 63% and I want to increase to at least 70% or even more maybe.
Here's the code:
inputs = Input(shape=(MAX_LENGTH, ))
embedding_layer = Embedding(vocab_size,
                            64,
                            input_length=MAX_LENGTH)(inputs)
# x = Flatten()(embedding_layer)
x = LSTM(32, input_shape=(32, 32))(embedding_layer)
x = Dense(10, activation='relu')(x)
predictions = Dense(num_class, activation='softmax')(x)
model = Model(inputs=[inputs], outputs=predictions)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['acc'])
model.summary()
filepath="weights-simple.hdf5"
checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
history = model.fit([X_train], batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.1,
                    shuffle=True, epochs=10, callbacks=[checkpointer])
That's a pretty general question; optimizing the performance of a neural network may require tuning many factors.
For instance:
The optimizer chosen: in NLP tasks rmsprop is also a popular optimizer
Tweaking the learning rate
Regularization - e.g. dropout, recurrent_dropout, batch norm. This may help the model generalize better
More units in the LSTM
More dimensions in the embedding
You can try grid search, e.g. using different optimizers and evaluate on a validation set.
The data may also need some tweaking, such as:
Text normalization - better representation of the tweets - remove unnecessary tokens (#, #)
Shuffle the data before the fit - keras validation_split creates a validation set using the last data records
There is no simple answer to your question.
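Purely as an illustration, a variant that applies a few of the suggestions above (more LSTM units, dropout/recurrent_dropout, a larger embedding, rmsprop) might look like the sketch below; the sizes and rates are placeholders rather than tuned values, and MAX_LENGTH, vocab_size, and num_class are the names from your code:
from keras.layers import Input, Embedding, LSTM, Dense, Dropout
from keras.models import Model

inputs = Input(shape=(MAX_LENGTH,))
x = Embedding(vocab_size, 128, input_length=MAX_LENGTH)(inputs)  # larger embedding
x = LSTM(64, dropout=0.2, recurrent_dropout=0.2)(x)              # more units + regularization
x = Dense(32, activation='relu')(x)
x = Dropout(0.3)(x)                                              # extra regularization
predictions = Dense(num_class, activation='softmax')(x)

model = Model(inputs=[inputs], outputs=predictions)
model.compile(optimizer='rmsprop',                               # alternative optimizer
              loss='categorical_crossentropy',
              metrics=['acc'])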

Keras - steps_per_epoch calculation not matching with the ImageDataGenerator output

I am working on a basic Classification task with Keras and I seem to have stumbled upon a problem where I need some assistance.
I have 200 samples for training and 100 for validation, and I intend to use an ImageDataGenerator to increase the number of training samples for my task. I want to be sure of the total number of training images that are passed to fit_generator().
I know that steps_per_epoch defines the total number of batches we get from a generator, and ideally it should be the number of samples divided by the batch size.
However, this is where things do not add up for me. Here is a snippet of my code:
num_samples = 200
batch_size = 10
gen = ImageDataGenerator(horizontal_flip=True,
                         vertical_flip=True,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         zoom_range=0.1,
                         rotation_range=10)
x,y = shuffle(img_data,img_label, random_state=2)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.333, random_state=2)
generator = gen.flow(x_train, y_train, save_to_dir='check_images/sample_run')
new_network.fit_generator(generator, steps_per_epoch=len(x_train)/batch_size, validation_data=(x_test, y_test), epochs=1, verbose=2)
I am saving the augmented images to see how the images turn out from the ImageDataGenerator and also to ascertain the number of images that are generated from it.
After running this code for a single epoch, I get 600 images in my directory, a number I cannot account for, or maybe I am making a mistake somewhere.
Any assistance in helping me understand the calculation in this code would be deeply appreciated. Has anyone come across similar problems?
TIA
gen.flow() creates a NumpyArrayIterator internally, which in turn uses Iterator to calculate the number of steps. If steps_per_epoch is None, the calculation is steps_per_epoch = (x.shape[0] + batch_size - 1) // batch_size, which is approximately the same as your calculation.
I am not sure why you see a larger number of samples. Could you print x.shape[0] and double-check that your code matches what you described?
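A quick way to check is to compute the same formula yourself (a sketch reusing x_train and batch_size from your snippet):
print(x_train.shape[0])
steps = (x_train.shape[0] + batch_size - 1) // batch_size  # the Iterator's own formula
print(steps)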

Is it logical to loop on model.fit in Keras?

Is it logical to do as below in Keras in order not to run out of memory?
for path in ['xaa', 'xab', 'xac', 'xad']:
    x_train, y_train = prepare_data(path)
    model.fit(x_train, y_train, batch_size=50, epochs=20, shuffle=True)
model.save('model')
It is, but prefer model.train_on_batch if each iteration is generating a single batch. This eliminates some overhead that comes with fit.
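For example, a sketch of the train_on_batch variant, assuming prepare_data returns full NumPy arrays and keeping the batch size of 50 from the question:
epochs = 20
batch_size = 50
for epoch in range(epochs):
    for path in ['xaa', 'xab', 'xac', 'xad']:
        x_train, y_train = prepare_data(path)
        # Feed this file's data to the model one batch at a time.
        for start in range(0, len(x_train), batch_size):
            model.train_on_batch(x_train[start:start + batch_size],
                                 y_train[start:start + batch_size])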
You can also try to create a generator and use model.fit_generator():
def dataGenerator(pathes, batch_size):
    while True:  # generators for Keras must be infinite
        for path in pathes:
            x_train, y_train = prepare_data(path)
            totalSamps = x_train.shape[0]
            batches = totalSamps // batch_size
            if totalSamps % batch_size > 0:
                batches += 1
            for batch in range(batches):
                section = slice(batch * batch_size, (batch + 1) * batch_size)
                yield (x_train[section], y_train[section])
Create and use:
gen = dataGenerator(['xaa', 'xab', 'xac', 'xad'], 50)
model.fit_generator(gen,
                    steps_per_epoch=expectedTotalNumberOfYieldsForOneEpoch,
                    epochs=epochs)
I would suggest having a look at this thread on Github.
You could indeed consider using model.fit(), but it would make the training more stable to do it in the following way:
for epoch in range(20):
    for path in ['xaa', 'xab', 'xac', 'xad']:
        x_train, y_train = prepare_data(path)
        model.fit(x_train, y_train, batch_size=50, epochs=epoch + 1, initial_epoch=epoch, shuffle=True)
This way you are iterating over all your data once per epoch, and not iterating 20 epochs over part of your data before switching.
As discussed in the thread, another solution would be to develop your own data generator and use it with model.fit_generator().
