TensorFlow model detecting wrong objects? - python-3.x

I am new to TensorFlow. I trained a model with 900 images of shoes. I put 20% (180 images) into the test folder and 80% (720 images) into the train folder. But after training, my model also detects other objects as shoes. Attached below are screenshots of predictions by my model.
Question 1: Can anyone help me, please? Where am I going wrong?
I am training this model on a Mac, using the faster_rcnn_inception_v2_coco_2018_01_28 model as the starting point.
I followed this tutorial to train the model:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/tree/d1c5b59803543e48362c27c48d704d4b0d92d135
Question 2: One more thing: when I run this model on a webcam, it is very slow. Why?
Thanks in advance...
Screenshots
[Screenshot 1: the model wrongly detecting another object as a shoe]
[Screenshot 2: the model correctly detecting a shoe]

Related

PyTorch-based BERT NER for transfer learning/retraining

I trained a BERT-based NER model using the PyTorch framework, following the article below:
https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/
After training the model with this approach, I saved it using the torch.save() method. Now I want to retrain the model on a new dataset.
Can someone please explain how to perform retraining/transfer learning? I'm new to NLP and transformers.
Thanks in advance.
First, read the PyTorch documentation on saving and loading models; it is very helpful for retraining a saved model on a new dataset. The links below show examples of loading a saved model.
Original doc: https://pytorch.org/tutorials/beginner/saving_loading_models.html
Example code: https://pythonguides.com/pytorch-load-model/
These two links should help you train a saved model on a new dataset.
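A minimal sketch of resuming training from a saved state dict, assuming a hypothetical NERModel class and a placeholder checkpoint file name (substitute your own BERT-based model and path):

import torch
import torch.nn as nn

# Hypothetical stand-in for your BERT-based NER model.
class NERModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(768, 9)  # 9 example entity tags

    def forward(self, x):
        return self.classifier(x)

model = NERModel()
# If you saved with torch.save(model.state_dict(), path):
model.load_state_dict(torch.load("ner_model.pt"))
# (If you instead saved the whole model with torch.save(model, path),
# use model = torch.load("ner_model.pt").)

model.train()  # switch back to training mode before retraining
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
# ...then run your usual training loop over the new dataset.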

Does training a TFLite model require annotated images?

I am trying to implement a TFLite model for food detection and segmentation. This is the model I chose for my food image dataset: https://tfhub.dev/s?deployment-format=lite&q=inception%20resnet%20v2
I searched Google to understand how the images need to be annotated, but only ended up confused. I understand that the dataset is converted to TFRecords and then fed to the pretrained model. But for training the model on a custom dataset, doesn't it require an annotation file? I don't see any info about this on TF Hub either.
Can anyone please help me with this?
The answer to your question depends on which model you plan to train.
In the case of a model for food detection and segmentation, you do need annotations during training. It is a supervised learning model, so if you do not provide it with labeled training data, it cannot learn.
If you were to train an autoencoder instead, the data would not need to be annotated. I hope the keywords in this answer help you search for more information about the topic.
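To make the annotation requirement concrete, here is a minimal sketch of packing one annotated image into a TFRecord using the field layout the TensorFlow Object Detection API conventionally expects; the file names, box coordinates, and class values are made-up placeholders:

import tensorflow as tf

def make_example(encoded_jpeg, width, height):
    # One training example: the raw image bytes plus its annotations.
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpeg])),
        "image/format": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"jpeg"])),
        "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        # Normalized bounding-box corners for each annotated object:
        "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=[0.1])),
        "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=[0.4])),
        "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=[0.2])),
        "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=[0.6])),
        "image/object/class/text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"pizza"])),
        "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("food_train.record") as writer:
    with open("pizza.jpg", "rb") as f:  # hypothetical annotated image
        writer.write(make_example(f.read(), 640, 480).SerializeToString())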

While implementing YOLO in Darknet, should we train on the ImageNet dataset?

I have installed Darknet on Ubuntu and am now trying to implement object detection using YOLOv2 on my custom dataset. In the YOLO paper, the authors say they pretrained the network on the ImageNet dataset. So my question is: should we also pretrain the network?
Sorry if I'm being blunt.
Can someone reply, please?
In most cases, if your dataset shares many features with the pretrained weights (e.g. person, car), you should use a pretrained network such as darknet53.conv.74 or darknet19_448.conv.23.
But you can also train the network without those pretrained weights (training from scratch), for example by omitting the weights file from the command:
./darknet detector train data/obj.data yolo-obj.cfg
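For comparison, the same command with pretrained weights appended would look like the line below, where darknet19_448.conv.23 is one of the pretrained backbones mentioned above and the data/cfg paths are the same placeholders:

./darknet detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23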

Retrain a saved model in Keras that was trained using train_on_batch()

I am working on GANs and need to save my models at the end of the working day, then resume training the saved models from where they left off. I am saving these three models to continue training later:
Discriminator Model.h5
Generator Model.h5
Generator-on-Discriminator Model.h5
For these models, I am using perceptual loss and Wasserstein loss. But when I call load_model to retrain the saved model, it raises the following error:
Unknown loss function:wasserstein_loss
I have also tried Discriminator.compile(loss=wasserstein_loss), but this still does not solve my issue. Can anyone please guide me on this and tell me whether it is possible to retrain a saved model using train_on_batch()?
Solved it on my own:
Defining custom_objects={'wasserstein_loss': wasserstein_loss} along with the path while loading the model solved my issue, i.e.:
Discriminator = load_model(model_path, custom_objects={'wasserstein_loss': wasserstein_loss})
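A minimal end-to-end sketch, assuming wasserstein_loss is defined as the usual Keras-style critic loss and the .h5 file name matches the one saved above:

from tensorflow.keras.models import load_model
import tensorflow.keras.backend as K

def wasserstein_loss(y_true, y_pred):
    # Standard Wasserstein critic loss: mean of the elementwise product.
    return K.mean(y_true * y_pred)

discriminator = load_model("Discriminator Model.h5",
                           custom_objects={"wasserstein_loss": wasserstein_loss})
# Training can then resume exactly as before, e.g. with train_on_batch().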

Retrain object detection model with own images (TensorFlow)

Good morning,
I have been working with the TensorFlow object detection tutorial, using the ssd_mobilenet model they provide as a frozen graph along with the corresponding checkpoint files (model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta).
However, as the images are sometimes recognized badly, I hoped I could feed my own images to the detection model and improve its performance on my images, which are all taken by the same camera.
Google could not help me figure out where to start. My questions:
- Are there any code snippets that show which of those files to load and how to train the existing model?
- Do I need to retrain the loaded model on the old data (i.e. COCO) plus the new data (my images), or can I retrain it using only my data, with the model remembering what it learned before?
Sorry for these very unspecific questions, but I just cannot figure out where to start.
There is a great walkthrough blog post and code base written by Dat Tran. He trained a model to recognize raccoons in images, starting from the pretrained ssd_mobilenet. It is the best starting point I have found. Hope this helps.
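Note that Dat Tran's walkthrough fine-tunes on the new images only, starting from the pretrained checkpoint, rather than mixing COCO data back in. Mechanically, retraining with the Object Detection API comes down to pointing the fine_tune_checkpoint field of your pipeline config at the provided model.ckpt and launching the training script; a hedged sketch using the API's legacy train.py script, with placeholder directory and config names:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/pipeline.config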
