Use models from Keras Applications without pretrained weights - keras

Keras Applications provides implementations of some of the most popular model architectures, with weights pretrained on some of the most popular datasets. These predefined models are very handy for transfer learning on problems that are similar to the datasets the models were trained on.
But what if I have a very different problem and want to train the models completely on the new dataset? How can I use the models in Applications to train from scratch on my own dataset, if I don't have pretrained weights?

You can pass None as the weights argument, for instance with the Inception V3 architecture:
keras.applications.inception_v3.InceptionV3(include_top=False, weights=None, input_shape=(img_width, img_height, 3))
include_top=False drops the fully-connected classifier at the top of the network so that you can add your own layers for your task.
weights=None means the model starts from randomly initialized weights, i.e. you train without any pretrained weights; if you want to start from the ImageNet weights instead, set weights='imagenet'.
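For completeness, here is a minimal sketch of training InceptionV3 from scratch with a custom classification head; the image size, number of classes, and dataset names below are placeholders for illustration, not values from the question.

from tensorflow import keras

# Placeholder values for the sketch.
img_width, img_height, num_classes = 299, 299, 10

# weights=None -> random initialization, nothing pretrained is loaded.
base = keras.applications.inception_v3.InceptionV3(
    include_top=False, weights=None,
    input_shape=(img_width, img_height, 3))

# include_top=False removed the original classifier, so add your own head.
x = keras.layers.GlobalAveragePooling2D()(base.output)
outputs = keras.layers.Dense(num_classes, activation='softmax')(x)
model = keras.Model(base.input, outputs)

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # your own tf.data pipelines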

Related

Backpropagation in bert

I would like to know: when people say "pretrained BERT model", is it only the final classification network that is trained, or are the weights inside the transformer also updated through backpropagation along with the classification network?
During pre-training, the entire model is trained (all weights are updated). Moreover, BERT is pre-trained on a masked language modelling objective, not a classification objective.
In pre-training, you usually train a model on a huge amount of generic data. It then has to be fine-tuned with task-specific data and a task-specific objective.
So, if your task is classification on a dataset X, you fine-tune BERT accordingly by adding a task-specific layer (a classification layer; in BERT this is a dense layer over the [CLS] token). While fine-tuning, you update the pre-trained model weights as well as the new task-specific layer.
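As an illustration, here is a minimal sketch of that fine-tuning setup using the Hugging Face transformers library (the library choice, model name, and label count are assumptions for the sketch, not taken from the answer). Gradients flow through both the new classification head and the pretrained encoder, so all weights are updated.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Pretrained BERT encoder plus a freshly initialized dense head over [CLS].
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
labels = torch.tensor([1])

# One fine-tuning step: the optimizer covers model.parameters(), so both the
# transformer weights and the new classification layer get updated.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()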

How to pick the pre-trained weights of a network up to a certain layer using keras?

Suppose we want to use in our model the pre-trained weights of VGG16 up to the layer before the third max pooling, and then add layers of our choice. How could we make this happen?
[Image: VGG16 architecture overview]
You can create a base_model, say the VGG model with its weights loaded and the unwanted layers popped off, and then add that base model together with the other layers of your choice to an empty Sequential model, as sketched below.
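A minimal sketch of one way to do this, using the Keras functional API to cut the pretrained VGG16 at 'block3_conv3' (the standard Keras name for the last convolutional layer before the third max pooling) rather than popping layers off; the added head and the class count are placeholders for illustration.

from tensorflow import keras
from tensorflow.keras.applications import VGG16

# Load VGG16 with ImageNet weights and truncate it at 'block3_conv3'.
vgg = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
truncated = keras.Model(inputs=vgg.input,
                        outputs=vgg.get_layer('block3_conv3').output)
truncated.trainable = False  # optional: keep the pretrained weights frozen

# Add the layers of your choice on top of the truncated backbone.
x = keras.layers.Conv2D(128, 3, padding='same', activation='relu')(truncated.output)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)  # placeholder: 10 classes
model = keras.Model(truncated.input, outputs)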

CNN with CTC loss

I want to extract features using a pretrained CNN model (ResNet50, VGG, etc.) and use the features with a CTC loss function.
I want to build it as a text recognition model.
Any ideas on how I can achieve this?
I'm not sure whether you are looking to fine-tune the pretrained models or to use them purely for feature extraction. To do the latter, freeze the pretrained model's weights (in PyTorch, set requires_grad=False on its parameters; note that calling .eval() only puts layers such as dropout and batch norm into inference mode, it does not stop the weights from being updated) and feed the features from the model's last layer into your new output head. See the PyTorch transfer learning tutorial for a more in-depth guide.
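As a rough illustration of the feature-extraction route in PyTorch (the pooling scheme, layer sizes, and character-set size below are assumptions for the sketch, not part of the answer):

import torch
import torch.nn as nn
from torchvision import models

class CTCRecognizer(nn.Module):
    def __init__(self, num_classes):  # num_classes includes the CTC blank symbol
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # torchvision >= 0.13
        # Keep the convolutional stages; drop the average pool and fc classifier.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        for p in self.features.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.head = nn.Linear(2048, num_classes)

    def forward(self, x):                    # x: (N, 3, H, W)
        f = self.features(x)                 # (N, 2048, H', W')
        f = f.mean(dim=2)                    # collapse height -> (N, 2048, W')
        f = f.permute(2, 0, 1)               # (T, N, 2048), T = width steps
        return self.head(f).log_softmax(dim=-1)   # (T, N, num_classes)

model = CTCRecognizer(num_classes=37)        # e.g. 26 letters + 10 digits + blank
criterion = nn.CTCLoss(blank=36)
# loss = criterion(model(images), targets, input_lengths, target_lengths)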

How to use a pre-trained object detection model in tensorflow?

How can I use the weights of a pre-trained network in my tensorflow project?
I know some of the theory behind this, but not how to code it in tensorflow.
As pointed out by @Matias Valdenegro in the comments, your first question does not make sense. For your second question, however, there are multiple ways to do it. The term you're searching for is Transfer Learning (TL). TL means transferring the "knowledge" (basically just the weights) from a pre-trained model into your model. There are several types of TL.
1) You transfer all of the weights from a pre-trained model into your model and use that as a starting point to train your network.
This is done when you have extra data to train your model but don't want to start the training over from scratch. You simply load the weights from your previous model and resume training.
2) You transfer only some of the weights from a pre-trained model into your new model.
This is done when you have a model trained to classify between, say, 5 classes of objects and you now want to add or remove a class. You don't have to re-train the whole network from scratch if the new class you're adding has features somewhat similar to (an) existing class(es). Instead, you build another model with the exact same architecture as your previous model, except for the fully-connected layers, which now have a different output size. In this case, you load the weights of the convolutional layers from the previous model and freeze them, while re-training only the fully-connected layers.
To perform these in TensorFlow:
1) The first type of TL can be performed by creating a model with the exact same architecture as the previous model, loading the weights with tf.train.Saver().restore(), and continuing the training.
2) The second type of TL can be performed by creating a model with the exact same architecture for the parts whose weights you want to retain, and then specifying which variables should be loaded from the previous pre-trained checkpoint. You can set trainable=False on those variables to prevent TensorFlow from updating them, as sketched below.
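A rough TF1-style sketch of the second approach (the variable names, shapes, and checkpoint path are placeholders; they have to match what is actually stored in your own checkpoint):

import tensorflow.compat.v1 as tf  # TF1-style graph/session API, as in the answer
tf.disable_v2_behavior()

# Re-create only the variables whose weights you want to keep, frozen.
conv1_w = tf.get_variable("conv1/weights", shape=[3, 3, 3, 64], trainable=False)
conv1_b = tf.get_variable("conv1/biases", shape=[64], trainable=False)
# ...build the rest of the graph; the new fully-connected layers stay trainable...

# Restore only the listed variables from the old checkpoint.
restorer = tf.train.Saver(var_list=[conv1_w, conv1_b])
init_new = tf.variables_initializer(
    [v for v in tf.global_variables() if v not in (conv1_w, conv1_b)])

with tf.Session() as sess:
    sess.run(init_new)
    restorer.restore(sess, "/path/to/pretrained.ckpt")  # placeholder path
    # Training ops will now only update the trainable (fully-connected) variables.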
I hope this helps.

Is there a way to create and train a model without transfer learning using tensorflow object-detection api?

I'm using faster_rcnn_resnet50 to train a model that will detect corrosion in images, and I want to train the model from scratch instead of using transfer learning.
I don't know if this is right, but the reason I want to do this is that the existing weights (which were trained on COCO) will affect my model trained on corrosion images.
One way I would like to do this is to randomize or unfreeze the weights of the ResNet50 feature extractor and then train the model on my images,
but there's no function or option in the resnet50 config file to randomize or unfreeze the weights.
I've made a new labelmap with a single label and tried it with transfer learning. It works, but I would like a model that is trained only on my images, where the previous weights don't affect my predictions.
This is the first time I'm working with object detection and transfer learning. Will the weights of the model pre-trained on COCO affect my model trained on custom corrosion images? How do you use the tensorflow object-detection API without transfer learning?
