Can BatchNormalization be used before adding a Conv layer? - conv-neural-network

Can BatchNormalization be used in this way, "layers.BatchNormalization(input_shape=image_shape)"? N.B.: this would come before adding the Conv layer.
I am trying to build a CNN architecture for image classification.
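A minimal sketch of the setup being asked about; Keras does accept BatchNormalization as the first layer of a Sequential model. The value of image_shape and all layer sizes below are illustrative assumptions, not taken from the question:

from keras.models import Sequential
from keras.layers import BatchNormalization, Conv2D, MaxPooling2D, Flatten, Dense

image_shape = (128, 128, 3)  # example input shape (assumption)

model = Sequential([
    # BatchNormalization as the very first layer, before any Conv2D;
    # as the first layer it carries the input_shape argument.
    BatchNormalization(input_shape=image_shape),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(1, activation='sigmoid'),
])
model.summary()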

Related

Graph embedding using Gensim Doc2vec, then binary classification with a 2-layer deep neural network in Keras

After making the graph embedding with Doc2vec, I want to do the classification with Keras. Do I have to create an Embedding layer and feed it as input to the neural network, or can I use the embedding directly and split it into training and testing sets? Also, does an Embedding layer improve the accuracy of the neural network or not?
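A minimal sketch of the second option, feeding the precomputed Doc2vec vectors straight into a small classifier. The variable names, layer sizes, and split ratio are illustrative assumptions; no Embedding layer is needed here because the vectors are already dense:

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

# Precomputed Doc2vec document vectors and their binary labels
# (hypothetical variables, e.g. taken from a trained gensim Doc2Vec model).
X = np.asarray(doc2vec_vectors)   # shape: (n_samples, vector_size)
y = np.asarray(binary_labels)     # shape: (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# A 2-layer dense network; the embeddings go in directly as features.
model = Sequential([
    Dense(64, activation='relu', input_shape=(X.shape[1],)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)

An Embedding layer is mainly useful when you want the network to learn a lookup table from integer indices during training; with vectors already produced by Doc2vec, it is not required.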

How to pick the pre-trained weights of a network up to a certain layer using keras?

Let's suppose we want to use in our model the pre-trained weights of VGG16 up to the layer before the third max pooling, and then add layers of our choice. How could we make this happen?
VGG16 architecture overview
You can create a new model: say, base_model (the VGG model with loaded weights, with the unwanted layers removed via pop()). Then add that truncated VGG and the other layers of your choice to an empty Sequential model.
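Alternatively, a functional-API sketch that cuts VGG16 at block3_conv3, the last conv layer before the third max pooling in keras.applications' layer naming; the input shape and the dense head below are illustrative assumptions:

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

vgg = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Cut at block3_conv3, the layer just before the third max pooling.
truncated = Model(inputs=vgg.input, outputs=vgg.get_layer('block3_conv3').output)

# Add the layers of your choice on top of the truncated base.
x = Flatten()(truncated.output)
x = Dense(256, activation='relu')(x)
outputs = Dense(10, activation='softmax')(x)  # 10 classes, illustrative
model = Model(inputs=truncated.input, outputs=outputs)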

Compatibility between keras and tf.keras models

I am interested in training a model in tf.keras and then loading it with keras. I know this is not highly advised, but I am interested in using tf.keras to train the model because
it is easier to build input pipelines with tf.keras, and
I want to take advantage of the tf.data API;
and I am interested in loading it with keras because
I want to use Core ML to deploy the model to iOS, and
I want to use coremltools to convert my model, and coremltools only works with keras, not tf.keras.
I have run into a few roadblocks, because not all of the tf.keras layers can be loaded as keras layers. For instance, I've had no trouble with a simple DNN, since all of the Dense layer parameters are the same between tf.keras and keras. However, I have had trouble with RNN layers, because tf.keras has an argument time_major that keras does not have. My RNN layers use time_major=False, which matches the keras behavior, but keras recurrent layers do not accept this argument.
My solution right now is to save the tf.keras model in a json file (for the model structure) and delete the parts of the layers that keras does not support, and also save an h5 file (for the weights), like so:
import json

model = ...  # model trained with tf.keras

# Save the architecture as JSON, stripping the arguments keras does not support.
model_json = model.to_json()
with open('path_to_model_json.json', 'w') as json_file:
    json_ = json.loads(model_json)
    layers = json_['config']['layers']
    for layer in layers:
        if layer['class_name'] == 'SimpleRNN':
            # tf.keras adds time_major, which keras does not recognize.
            del layer['config']['time_major']
    json.dump(json_, json_file)

# Save the weights separately.
model.save_weights('path_to_my_weights.h5')
Then, I use the coremltools keras converter to convert from keras to coreml, like so:
import coremltools
import tensorflow as tf
from keras.initializers import glorot_uniform
from keras.utils import CustomObjectScope

with CustomObjectScope({'GlorotUniform': glorot_uniform()}):
    coreml_model = coremltools.converters.keras.convert(
        model=('path_to_model_json.json', 'path_to_my_weights.h5'),
        input_names=...,   # inputs
        output_names=...,  # outputs
        class_labels=...,  # labels
        custom_conversion_functions={
            'GlorotUniform': tf.keras.initializers.glorot_uniform
        }
    )
coreml_model.save('my_core_ml_model.mlmodel')
My solution appears to be working, but I am wondering if there is a better approach? Or, is there imminent danger in this approach? For instance, is there a better way to convert tf.keras models to coreml? Or is there a better way to convert tf.keras models to keras? Or is there a better approach that I haven't thought of?
Any advice on the matter would be greatly appreciated :)
Your approach seems good to me!
In the past, when I had to convert a tf.keras model to a keras model, I did the following:
1. Train the model in tf.keras.
2. Save only the weights: tf_model.save_weights("tf_model.hdf5").
3. Build the same model architecture in keras, using keras layers throughout (matching the tf.keras one).
4. Load the weights by layer name in keras: keras_model.load_weights("tf_model.hdf5", by_name=True).
This worked for me. Since I was using an out-of-the-box architecture (DenseNet169), very little work was needed to replicate the tf.keras network in keras.
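A minimal sketch of that workflow, assuming a DenseNet169 classifier; the number of classes and the file name are illustrative, and the transfer relies on the layer names matching between the two builds:

import tensorflow as tf
from keras.applications import DenseNet169

# 1. Train in tf.keras (training calls elided), then save only the weights.
tf_model = tf.keras.applications.DenseNet169(weights=None, classes=10)
# tf_model.compile(...); tf_model.fit(...)
tf_model.save_weights('tf_model.hdf5')

# 2. Rebuild the identical architecture in standalone keras.
keras_model = DenseNet169(weights=None, classes=10)

# 3. Load the tf.keras weights into it, matching layers by name.
keras_model.load_weights('tf_model.hdf5', by_name=True)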

Pretrained model for Biomedical Image Segmentation

I would like to use a pre-trained model (in the encoder part) for biomedical image segmentation with a U-Net architecture.
My question is which pre-trained model should I use?
I tried VGG16 and VGG19 with the following options, but I could not get an improvement (a freezing sketch follows the list):
1. I froze all layers of both models; the rest of the network (the decoder) was trainable.
2. I froze the first 16 layers, and then tried freezing only the first 11; the rest of the layers were trainable.
3. I made all layers trainable.
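A minimal sketch of option 2 for the encoder side, assuming a VGG16 backbone and an illustrative input shape; the U-Net decoder that would sit on top is omitted:

from keras.applications import VGG16

# Pre-trained encoder for a U-Net-style model; input shape is illustrative.
encoder = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))

# Freeze the first 16 layers; leave the rest trainable
# (change 16 to 11, or to len(encoder.layers), for the other options).
for i, layer in enumerate(encoder.layers):
    layer.trainable = i >= 16

# ... a decoder would then be built on encoder.output ...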

Fine-Tune pre-trained InceptionResnetV2

Steps for fine-tuning a network are as follows:
1. Add your custom network on top of an already-trained base network.
2. Freeze the base network.
3. Train the part you added.
4. Unfreeze some layers in the base network.
5. Jointly train both these layers and the part you added.
Now if the network architecture is as simple as VGG16's, we can simply unfreeze the base network from block5_conv1 (Conv2D) onward and re-train it (a sketch follows).
VGG16 Architecture
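A sketch of that unfreezing for the VGG16 case; conv_base here is a VGG16 base with ImageNet weights, as an illustrative assumption:

from keras.applications import VGG16

conv_base = VGG16(weights='imagenet', include_top=False)

# Freeze everything before block5_conv1; unfreeze from there onward.
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable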
But when the architecture is as complex as InceptionResNetV2's, where should one start? Does anyone have any practical experience? Run the following code in Python to see the model:
from keras.applications import InceptionResNetV2

conv_base = InceptionResNetV2(weights='imagenet',
                              include_top=False,
                              input_shape=(299, 299, 3))
conv_base.summary()

from keras.utils import plot_model
plot_model(conv_base, to_file='model.png')
A very basic fine-tuning of a model with InceptionResNetV2 will look like this:
from inception_resnet_v2 import InceptionResNetV2
from keras.layers import Dense
from keras.models import Model

# ImageNet classification
model = InceptionResNetV2()
model.predict(...)

# Fine-tuning on another 100-class dataset
base_model = InceptionResNetV2(include_top=False, pooling='avg')
# The first argument of Dense below is the number of classes.
outputs = Dense(100, activation='softmax')(base_model.output)
model = Model(base_model.inputs, outputs)
model.compile(...)
model.fit(...)
This is a good place to start: github.com/yuyang-huang/keras-inception-resnet-v2
