How to do incremental learning on MobileNet-SSD Caffe

I'm training my classifier on 20k images, but every week I get new pictures. I want to continue training my previous model (from the last saved iteration) on the new data instead of retraining on all 20k + new images from scratch, which wastes time and compute.
I figured out incremental training with YOLO, but I can't seem to find anything for the MobileNet-SSD Caffe implementation here: https://github.com/chuanqi305/MobileNet-SSD
To understand more about what I'm talking about, refer to the question "How to do incremental training on the basis of yolov3.weights", whose answer boils down to:
darknet.exe partial cfg/yolov3.cfg yolov3.weights yolov3.conv.105 105

You need to pass your latest snapshot to train.sh instead of the 73000-iteration model. The new iterations can be found in the snapshot folder once you are done training.
if ! test -f example/MobileNetSSD_train.prototxt; then
    echo "error: example/MobileNetSSD_train.prototxt does not exist."
    echo "please use the gen_model.sh to generate your own model."
    exit 1
fi
mkdir -p snapshot
# Initiate training. To continue from your own previous run, point -weights
# at the latest .caffemodel in the snapshot/ folder instead of the
# pretrained mobilenet_iter_73000.caffemodel.
$CAFFE_ROOT/build/tools/caffe train -solver="solver_train.prototxt" \
    -weights="mobilenet_iter_73000.caffemodel" \
    -gpu 0
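If you drive Caffe from Python instead of train.sh, the same idea looks roughly like this. This is only a sketch using pycaffe; the snapshot filename and iteration number (85000) are placeholders for whatever your latest snapshot is actually called.
import caffe

caffe.set_device(0)
caffe.set_mode_gpu()

solver = caffe.get_solver("solver_train.prototxt")

# Option A: fine-tune from the weights of your latest snapshot
# (equivalent to passing -weights to the caffe binary)
solver.net.copy_from("snapshot/mobilenet_iter_85000.caffemodel")

# Option B: resume the full solver state (iteration counter, LR schedule, momentum),
# equivalent to passing -snapshot to the caffe binary
# solver.restore("snapshot/mobilenet_iter_85000.solverstate")

solver.solve()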

Related

Getting rid of the clutter of `.lr_find_` in pytorch lightning?

When using Lightning's built-in LR finder:
# Create a Tuner
tuner = Tuner(trainer)
# finds learning rate automatically
# sets hparams.lr or hparams.learning_rate to that learning rate
tuner.lr_find(model)
a lot of lr_find_XXX.ckpt checkpoints are created in the running directory, which creates clutter. How can I make sure that these checkpoints are not created, or keep them in a dedicated directory?
As defined in lr_finder.py:
# Save initial model, that is loaded after learning rate is found
ckpt_path = os.path.join(trainer.default_root_dir, f".lr_find_{uuid.uuid4()}.ckpt")
trainer.save_checkpoint(ckpt_path)
the initial model is saved with the checkpoint you mention (lr_find_XXX.ckpt) to the directory trainer.default_root_dir. If no default directory is defined when the trainer is initialized, the current working directory is used as default_root_dir. After finding the ideal learning rate, lr_find restores the initial model from the checkpoint and removes the checkpoint.
# Restore initial state of model
trainer._checkpoint_connector.restore(ckpt_path)
trainer.strategy.remove_checkpoint(ckpt_path)
You are probably stopping the program before the checkpoint is restored and removed, so you have two options:
Wait for the ideal learning rate to be found so that the checkpoint is removed
Change the default_root_dir: Trainer(default_root_dir='./NAME_OF_THE_DIR') (see the sketch below), but be aware that this is also the directory that the Lightning logs are saved to.
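A minimal sketch of the second option, assuming Lightning 2.x; TinyModel, the random data, and the directory name are placeholders just to make the example runnable:
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner

class TinyModel(pl.LightningModule):
    def __init__(self, lr=1e-3):
        super().__init__()
        self.lr = lr  # lr_find writes the suggested learning rate here
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

train_loader = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=8)

# the .lr_find_*.ckpt files (and the lightning logs) now go to ./lr_find_runs instead of the CWD
trainer = pl.Trainer(default_root_dir="./lr_find_runs", max_epochs=1)
tuner = Tuner(trainer)
tuner.lr_find(TinyModel(), train_dataloaders=train_loader)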

How can I know how many iterations are left when tuning across multiple hyperparameters in SparkML?

I'm running cross-validation across a grid of multiple hyperparameters with an XGBoost model using PySpark in Databricks, and I would like to know the progress of this operation. So far it has been running for almost 24 hours and I have no idea if it's halfway done or only 10% of the way. I have 128k combinations of hyperparameters with 5 folds each, so a total of 640k runs.
I've tried clicking on the MLflow logged run, but it's an empty page with an UNFINISHED status. Is there any way to know the progress?
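As a trivial sanity check on the total amount of work implied by the question's own numbers (the counts below come from the question, not from Spark): CrossValidator fits one model per parameter combination per fold, plus a final refit of the best model.
num_param_combinations = 128_000  # size of the hyperparameter grid, from the question
num_folds = 5                     # cross-validation folds, from the question

# one fit per (combination, fold); CrossValidator also refits the best model once at the end
total_fits = num_param_combinations * num_folds
print(total_fits)  # 640000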

How are checkpoints created in a custom object detector with tensorflow 2 model zoo?

I've currently been training some models from the tensorflow2 object detection model zoo following the tutorial from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html.
One of my doubts arose when I wanted to export my model from the checkpoint with the total loss I was looking for. However, I found that I had 16 checkpoints (though only the last five of them are kept on disk) even though only 1500 steps had elapsed. I should mention that I passed a flag to save a checkpoint every 100 steps. So I'm wondering whether:
It creates an initial checkpoint, say checkpoint 0, so if I want to export a model from the 1400th step I should take the 15th checkpoint,
or
It creates a "placeholder" for the future checkpoint, i.e. if training is currently at step 1500 it prepares to store the next checkpoint, in which case I should take the 14th checkpoint.
I leave an image for reference.
In this example, I have 16 checkpoints but only 1500 steps have elapsed, and I've chosen to save a new checkpoint every 100 steps. If I want to export a new model from step 1400, should I export the 14th or the 15th?
Any help would be much appreciated.
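Not an answer from the Object Detection API itself, but a plain-TF2 sketch of how one extra save before the training loop shifts the numbering, which is the off-by-one being asked about; the directory, step counts, and save interval below are made up to mirror the question:
import tensorflow as tf

step = tf.Variable(0, dtype=tf.int64)
ckpt = tf.train.Checkpoint(step=step)
manager = tf.train.CheckpointManager(ckpt, "./ckpt_demo", max_to_keep=20)

manager.save()                      # ckpt-1, written at step 0 before any training step
for i in range(1, 1501):
    step.assign_add(1)
    if i % 100 == 0:
        manager.save()              # ckpt-2 at step 100, ckpt-3 at step 200, ...

# with an initial save like this, step 1400 corresponds to ckpt-15 and step 1500 to ckpt-16
print(manager.checkpoints)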

CUDA out of memory when running Bert with Pytorch (Previously worked)

I am building a BERT binary classification on SageMaker using Pytorch.
Previously when I ran the model, I set the batch size to 16 and it ran successfully. However, after I stopped SageMaker yesterday and restarted it this morning, I can't run the model with batch size 16 any more; I can only run it with batch size 8.
However, the model does not produce the same result (of course). I didn't change anything else in between; all other settings are the same (except that I changed the SageMaker volume from 30GB to 200GB).
Does anyone know what may cause this problem? I really want to reproduce the result with batch size 16.
Any answers will help and thank you in advance!

How do we know the model converged while training a neural network for object classification/detection?

I am using Tensorflow's Object Detection API to detect cars. It should detect the cars as one class "car".
I followed this series by sentdex:
https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/
System information:
OS - Ubuntu 18.04 LTS
GPU - Nvidia 940M (VRAM : 2GB)
Tensorflow : 1.10
Python - 3.6
CPU - Intel i5
Problem:
The training process runs pretty fine. To know when the model converges and when I should stop training, I watch the loss per step in the terminal where training is running, and I also watch the Total Loss graph in TensorBoard by running the following command in another terminal:
tensorboard --logdir="training"
But even after training for 60k steps, the loss fluctuates between 2.1 and 1.2. If I stop the training and export the inference graph from the last checkpoint (saved in the training/ folder), it detects cars in some cases and gives false positives in others.
I also tried running eval.py like below,
python3 eval.py --logtostderr --pipeline_config_path=training/ssd_mobilenet_v1_pets.config --checkpoint_dir=training/ --eval_dir=eval/
but it gives an error indicating that the GPU does not have enough memory to run this script alongside train.py.
So I stop the training to make sure the GPU is free and then run eval.py, but it creates only one eval point in the eval/ folder. Why?
Also, how do I tell from the precision graphs in TensorBoard that training should be stopped?
I could also post screenshots if anyone wants.
Should I keep training till the loss stays on an average around 1?
Thanks.
PS: Added the Total Loss graph below, up to 66k steps.
PS2: After 2 days of training (still running), this is the total loss graph below.
Usually, one uses a separate set of data to measure the error and generalisation abilities of the model. So, one would have the following sets of data to train and evaluate a model:
Training set: The data used to train the model.
Validation set: A separate set of data which will be used to measure the error during training. The data of this set is not used to perform any weight updates.
Test set: This set is used to measure the model's performance after the training.
In your case, you would have to define a separate set of data, the validation set, run an evaluation repeatedly after a fixed number of batches/steps, and log the error or accuracy. What usually happens is that the error on that data decreases at the beginning and starts to increase at some point during training. So it's important to keep track of that error and to save a checkpoint whenever it decreases. The checkpoint with the lowest error on your validation data is the one you want to use. This technique is called early stopping.
The reason why the error increases after a certain point during training is called overfitting: the model loses its ability to generalize to unseen data.
Edit:
Here's an example of a training loop with early stopping procedure:
# _MAX_ITER / _TEST_ITER and the self._* ops come from the surrounding class;
# every _TEST_ITER steps the loop runs over the whole validation set.
for step in range(1, _MAX_ITER):
    if step % _TEST_ITER == 0:
        sample_count = 0
        while True:
            try:
                test_data = sess.run(test_batch)
                test_loss, summary = self._model.loss(sess, test_data[0],
                                                      self._assign_target(test_data),
                                                      self._merged_summary)
                sess.run(self._increment_loss_opt,
                         feed_dict={self._current_loss_pl: test_loss})
                sample_count += 1
            except tf.errors.OutOfRangeError:
                # validation set exhausted: compute the average loss for this pass
                score = sess.run(self._avg_batch_loss,
                                 feed_dict={self._batch_count_pl: sample_count})
                best_score = sess.run(self._best_loss)
                if score < best_score:
                    '''
                    Save your model here...
                    '''
                break
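For reference, if you were training a plain Keras model rather than going through the Object Detection API, the same early-stopping idea is available as built-in callbacks. A minimal sketch; the model and dataset names in the commented fit call are placeholders:
import tensorflow as tf

callbacks = [
    # stop once the validation loss has not improved for 5 consecutive epochs
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    # keep only the checkpoint with the lowest validation loss so far
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss",
                                       save_best_only=True),
]

# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=callbacks)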
