Pretrained model for Biomedical Image Segmentation - keras

I would like to use a pre-trained model for the encoder part of a U-Net architecture for biomedical image segmentation.
My question is: which pre-trained model should I use?
I tried VGG16 and VGG19 with the following options (a sketch of my setup follows the list), but I could not get an improvement:
1. I froze all layers of the pre-trained encoder (for both models) and kept the rest of the layers trainable.
2. I froze only the first 16 layers, and in another run only the first 11 layers, keeping the rest of the layers trainable.
3. I made all layers trainable.
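For context, here is a rough sketch of how I wire VGG16 into the U-Net encoder (the layer names are the standard Keras VGG16 names; the input size and decoder are simplified, and the freezing loop corresponds to option 1 above):
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose, concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(256, 256, 3))
encoder = VGG16(include_top=False, weights='imagenet', input_tensor=inputs)

# Skip connections taken from the standard VGG16 block outputs
s1 = encoder.get_layer('block1_conv2').output      # 256x256x64
s2 = encoder.get_layer('block2_conv2').output      # 128x128x128
s3 = encoder.get_layer('block3_conv3').output      # 64x64x256
s4 = encoder.get_layer('block4_conv3').output      # 32x32x512
bridge = encoder.get_layer('block5_conv3').output  # 16x16x512

# Option 1: freeze the whole pre-trained encoder
for layer in encoder.layers:
    layer.trainable = False

# Simplified decoder: upsample and concatenate with the skip connections
x = Conv2DTranspose(512, 2, strides=2, padding='same')(bridge)
x = Conv2D(512, 3, activation='relu', padding='same')(concatenate([x, s4]))
x = Conv2DTranspose(256, 2, strides=2, padding='same')(x)
x = Conv2D(256, 3, activation='relu', padding='same')(concatenate([x, s3]))
x = Conv2DTranspose(128, 2, strides=2, padding='same')(x)
x = Conv2D(128, 3, activation='relu', padding='same')(concatenate([x, s2]))
x = Conv2DTranspose(64, 2, strides=2, padding='same')(x)
x = Conv2D(64, 3, activation='relu', padding='same')(concatenate([x, s1]))

outputs = Conv2D(1, 1, activation='sigmoid')(x)  # binary segmentation mask
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')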

Related

I create graph embeddings with Gensim Doc2Vec and then do binary classification with a two-layer deep neural network in Keras

After creating the graph embeddings with Doc2Vec, I want to do the classification in Keras. Do I have to create an Embedding layer and feed it as the input to the neural network, or can I use the embeddings directly and split them into training and test sets? Also, does an Embedding layer improve the accuracy of the neural network or not?
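To make the question concrete, this is roughly what I mean by using the embeddings directly (a sketch only; doc2vec_vectors and labels are placeholders for my actual Doc2Vec output and graph labels):
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Placeholder names: one Doc2Vec vector per graph and one binary label each
X = np.array(doc2vec_vectors)   # shape: (n_samples, vector_size)
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# The embeddings are fed in directly as dense features; no Embedding layer is used
model = Sequential([
    Dense(64, activation='relu', input_shape=(X.shape[1],)),
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, validation_data=(X_test, y_test))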

How to use BERT pre-trained model in Keras Embedding layer

How do I use a pre-trained BERT model like bert-base-uncased as weights in the Embedding layer in Keras?
Currently, I am generating word embeddings using the BERT model and it takes a lot of time. I am assigning those weights as in the code shown below:
model.add(Embedding(307200, 1536, input_length=1536, weights=[embeddings]))
I searched on the internet, but the methods I found are given for PyTorch. I need to do it in Keras. Please help.
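One pattern I came across (a sketch I have not fully verified) is to wrap the pre-trained BERT model directly into the Keras model with the HuggingFace transformers package instead of copying the vectors into an Embedding layer; the sequence length and head layers below are just placeholders:
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = TFBertModel.from_pretrained('bert-base-uncased')
bert.trainable = False  # use BERT as a frozen embedding extractor

max_len = 128
input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name='input_ids')
attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name='attention_mask')

# Contextual embeddings: (batch, max_len, 768) for bert-base-uncased
sequence_output = bert(input_ids, attention_mask=attention_mask).last_hidden_state
cls_vector = sequence_output[:, 0, :]  # representation of the [CLS] token

x = tf.keras.layers.Dense(64, activation='relu')(cls_vector)
pred = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model([input_ids, attention_mask], pred)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Usage: tokenize the texts once, then train as usual, e.g.
# enc = tokenizer(texts, padding='max_length', truncation=True,
#                 max_length=max_len, return_tensors='np')
# model.fit([enc['input_ids'], enc['attention_mask']], y, epochs=3)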

How to pick the pre-trained weights of a network up to a certain layer using keras?

Let's suppose we want to use in our model the pre-trained weights of VGG16 up to the layer just before the third max pooling, and then add layers of our choice. How could we make that happen?
VGG16 architecture overview
You can create a new model from, say, base_model (the VGG model with loaded weights and the unwanted layers pop()'ped off), and then add that base model and any other layers of your choice to an empty Sequential model.
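For illustration, a sketch of the same idea using the functional API instead of pop() ('block3_conv3' is the last layer before the third max pooling in the Keras VGG16 layer naming; the added head is just an example):
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Conv2D, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

vgg = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Cut VGG16 at 'block3_conv3', i.e. just before the third max pooling
truncated = Model(inputs=vgg.input, outputs=vgg.get_layer('block3_conv3').output)

# Optionally keep the pre-trained weights frozen
for layer in vgg.layers:
    layer.trainable = False

# Add layers of our choice on top of the truncated network
x = Conv2D(128, 3, activation='relu', padding='same')(truncated.output)
x = GlobalAveragePooling2D()(x)
x = Dense(64, activation='relu')(x)
out = Dense(10, activation='softmax')(x)  # example 10-class head

model = Model(inputs=truncated.input, outputs=out)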

Use models from Keras Applications without pretrained weights

Keras Applications provides implementations of some of the most popular model architectures, with weights pretrained on some of the most popular datasets. These predefined models are very handy for transfer learning on problems that are similar to the datasets the models were trained on.
But what if I have a very different problem and want to train a model completely on the new dataset? How can I use the models in Applications for training from scratch on my own dataset, if I don't have pretrained weights?
You can assign None to the weights argument, for instance with the Inception V3 architecture:
keras.applications.inception_v3.InceptionV3(include_top=False, weights=None, input_shape=(img_width, img_height, 3))
include_top=False lets you attach and train your own top (classification) layers on top of the convolutional base.
weights=None means the model starts from randomly initialized weights, i.e. training without any pre-trained weights; if you want to start from the ImageNet weights, set weights='imagenet' instead.
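Putting that together, a short sketch of training from scratch with a custom top (the image size, number of classes, and head layers are only placeholders):
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

img_width, img_height = 299, 299
num_classes = 5  # placeholder for your own dataset

# weights=None -> random initialization, nothing is downloaded
base = InceptionV3(include_top=False, weights=None,
                   input_shape=(img_width, img_height, 3))

x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
outputs = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) on your own dataset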

VGG16 trained on grayscale imagenet

I have found the VGG16 network pre-trained on the (color) imagenet database (as .npy). Is there a VGG16 network pre-trained on a gray-scale version of the imagenet database available?
(The usual 'tricks' for using the 3-channel filters of the conv1.1 layer on the gray 1-channel input are not enough for me. I am looking at incremental improvements of the network performance, so I need to see how the transfer learning behaves when the pre-trained model was 'looking' at gray-scale input).
Thanks!
Yes, there's this one:
https://github.com/DaveRichmond-/grayscale-imagenet
It is a model trained on grayscale ImageNet, along with a version of it that is fine-tuned on X-rays. They showed that the ImageNet performance barely drops, by the way.
@GrimSqueaker pointed you to the model from this paper: https://openaccess.thecvf.com/content_eccv_2018_workshops/w33/html/Xie_Pre-training_on_Grayscale_ImageNet_Improves_Medical_Image_Classification_ECCVW_2018_paper.html
However, the model trained in it is Inception v3, not VGG16.
You have two options:
Use a color pre-trained VGG16 model and replicate the single grayscale channel across the three input channels (see the sketch below).
Train your own VGG16 model on a grayscaled version of the ImageNet dataset.
You may find this link useful:
https://github.com/zzangho/VGG16_grayscale
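For the first option, a minimal sketch of the channel-duplication trick (with dummy data; this is exactly the kind of workaround the question already mentions):
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Dummy grayscale batch, shape (n, 224, 224, 1), values in [0, 255]
x_gray = np.random.randint(0, 256, size=(4, 224, 224, 1)).astype('float32')

# Replicate the single grayscale channel across the three RGB channels
x_rgb = np.repeat(x_gray, 3, axis=-1)
x_rgb = preprocess_input(x_rgb)

model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
features = model.predict(x_rgb)  # (4, 7, 7, 512) feature maps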
