Is there a possibility to visualize intermediate layers in Keras?

I am using the DenseNet121 CNN from the Keras library and I would like to visualize the feature maps when I predict images. I know this is possible with a CNN we have built ourselves.
Is it the same for models available in Keras, like DenseNet?
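A minimal sketch of one common way to do this, assuming the standalone keras package and an illustrative DenseNet121 layer name (check model.summary() for the real names); a sketch, not a definitive recipe:

import numpy as np
from keras.applications.densenet import DenseNet121, preprocess_input
from keras.models import Model

# Load the built-in DenseNet121 and build a second model that maps the same
# input to an intermediate activation.
base = DenseNet121(weights='imagenet', include_top=False)
feature_layer = base.get_layer('conv3_block1_concat')   # assumed layer name
extractor = Model(inputs=base.input, outputs=feature_layer.output)

# Stand-in for a real image; replace with a preprocessed photo of your own.
img = np.random.rand(1, 224, 224, 3).astype('float32') * 255.0
feature_maps = extractor.predict(preprocess_input(img))
print(feature_maps.shape)  # e.g. (1, 28, 28, C); each channel can be plotted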

Related

Pytorch image segmentation transfer learning

I am new to PyTorch. My question is: how do I apply transfer learning to a custom dataset? I am doing image segmentation on brain tumors. I can find examples that use the U-Net structure, but I could not find examples that use the weights of pre-trained models for U-Net image segmentation.
You could obtain pre-trained models in two ways:
Model weights or complete models shared in formats such as .pt or .pth:
In this case, Saving and Loading Models is a good starting point. Copying from the tutorial there, you could load a model as
import torch

model = TheModelClass(*args, **kwargs)   # TheModelClass: your model's class definition
model.load_state_dict(torch.load(PATH))  # PATH: path to the saved state_dict file
The other way is to load the model from torchvision. A list of available models is given at Torchvision Models; U-Net is not available yet. However, it is possible to load a pre-trained model as the encoder and write a separate decoder to form a U-Net with a pre-trained encoder (a sketch of this is given at the end of this answer).
In this case, the model object returned from the function calls shown in the API is already loaded with pretrained weights when pretrained=True.
For writing a custom dataloader, PyTorch data loaders may be a useful guide.
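To make the second option concrete, here is a minimal sketch (an illustration with an intentionally tiny decoder and no skip connections, not a reference U-Net) of wrapping a pre-trained torchvision encoder in a segmentation model:

import torch
import torch.nn as nn
import torchvision

class SegmentationNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = torchvision.models.resnet18(pretrained=True)
        # Encoder: everything up to (but not including) the avgpool/fc head.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Very small decoder that upsamples the 512-channel feature map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, x):
        features = self.encoder(x)     # (N, 512, H/32, W/32) for ResNet-18
        return self.decoder(features)  # coarse segmentation logits

model = SegmentationNet(num_classes=2)
out = model(torch.randn(1, 3, 224, 224))  # -> (1, 2, 28, 28)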

CNN with CTC loss

I want to extract features using a pretrained CNN model (ResNet50, VGG, etc.) and use the features with a CTC loss function.
I want to build it as a text recognition model.
Does anyone know how I can achieve this?
I'm not sure if you are looking to fine-tune the pretrained models or to use the models for feature extraction. To do the latter, freeze the pretrained model's weights (in PyTorch the simplest way is to set requires_grad = False on its parameters; note that calling .eval() only changes the behavior of layers such as dropout and batch norm, it does not freeze the weights), and feed the features from the last layer of the model to your new output head, as sketched below. See the PyTorch tutorial here for a more in-depth guide.
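For example, here is a hedged sketch of that setup; the height pooling, input size, and alphabet size are assumptions for a simple text-line recognizer, not a reference implementation:

import torch
import torch.nn as nn
import torchvision

num_classes = 37  # assumption: 26 letters + 10 digits + 1 CTC blank (index 0)

# Frozen pre-trained backbone used purely as a feature extractor.
backbone = torchvision.models.resnet50(pretrained=True)
features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
for p in features.parameters():
    p.requires_grad = False

head = nn.Linear(2048, num_classes)  # trainable output head over channels

def forward(images):
    # images: (N, 3, H, W); ResNet-50 feature map: (N, 2048, H/32, W/32)
    fmap = features(images)
    fmap = fmap.mean(dim=2)               # pool the height -> (N, 2048, W/32)
    seq = fmap.permute(2, 0, 1)           # (T, N, 2048), time-major for CTC
    return head(seq).log_softmax(dim=-1)  # (T, N, num_classes)

ctc = nn.CTCLoss(blank=0)
images = torch.randn(2, 3, 64, 512)              # two fake text-line crops
log_probs = forward(images)                      # T = 512 / 32 = 16 time steps
targets = torch.randint(1, num_classes, (2, 5))  # two fake 5-character labels
loss = ctc(log_probs, targets,
           torch.full((2,), log_probs.size(0), dtype=torch.long),
           torch.full((2,), 5, dtype=torch.long))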

Compatibility between keras and tf.keras models

I am interested in training a model in tf.keras and then loading it with keras. I know this is not highly-advised, but I am interested in using tf.keras to train the model because
it is easier to build input pipelines with tf.keras
I want to take advantage of the tf.data Dataset API
and I am interested in loading it with keras because
I want to use Core ML to deploy the model to iOS.
I want to use coremltools to convert my model for iOS, and coremltools only works with keras, not tf.keras.
I have run into a few roadblocks, because not all of the tf.keras layers can be loaded as keras layers. For instance, I've had no trouble with a simple DNN, since all of the Dense layer parameters are the same between tf.keras and keras. However, I have had trouble with RNN layers, because tf.keras has an argument time_major that keras does not have. My RNN layers have time_major=False, which is the same behavior as keras, but keras RNN layers do not accept this argument.
My solution right now is to save the tf.keras model in a json file (for the model structure) and delete the parts of the layers that keras does not support, and also save an h5 file (for the weights), like so:
import json

model = ...  # model trained with tf.keras

# save json
model_json = model.to_json()
with open('path_to_model_json.json', 'w') as json_file:
    json_ = json.loads(model_json)
    layers = json_['config']['layers']
    for layer in layers:
        if layer['class_name'] == 'SimpleRNN':
            # drop the tf.keras-only argument that keras does not understand
            del layer['config']['time_major']
    json.dump(json_, json_file)

# save weights
model.save_weights('path_to_my_weights.h5')
Then, I use the coremltools keras converter to convert from keras to Core ML, like so:
import coremltools
import tensorflow as tf
from keras.initializers import glorot_uniform
from keras.utils import CustomObjectScope

with CustomObjectScope({'GlorotUniform': glorot_uniform()}):
    coreml_model = coremltools.converters.keras.convert(
        model=('path_to_model_json.json', 'path_to_my_weights.h5'),
        input_names=...,   # inputs
        output_names=...,  # outputs
        class_labels=...,  # labels
        custom_conversion_functions={
            "GlorotUniform": tf.keras.initializers.glorot_uniform
        },
    )
coreml_model.save('my_core_ml_model.mlmodel')
My solution appears to be working, but I am wondering if there is a better approach? Or, is there imminent danger in this approach? For instance, is there a better way to convert tf.keras models to coreml? Or is there a better way to convert tf.keras models to keras? Or is there a better approach that I haven't thought of?
Any advice on the matter would be greatly appreciated :)
Your approach seems good to me!
In the past, when I had to convert a tf.keras model to a keras model, I did the following:
Train the model in tf.keras
Save only the weights: tf_model.save_weights("tf_model.hdf5")
Build the same model architecture using keras layers (matching the tf.keras one)
Load the weights by layer name in keras: keras_model.load_weights("tf_model.hdf5", by_name=True)
This seemed to work for me. Since I was using an out-of-the-box architecture (DenseNet169), it took very little work to replicate the tf.keras network in keras.
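A minimal sketch of those steps, assuming a DenseNet169 classifier, that both standalone keras and tensorflow are installed, and that the layer names match between the two implementations:

import tensorflow as tf
import keras

# 1. Train in tf.keras, then save only the weights (HDF5 format).
tf_model = tf.keras.applications.DenseNet169(weights=None, classes=10)
# ... train tf_model here ...
tf_model.save_weights("tf_model.hdf5")

# 2. Rebuild the same architecture with standalone keras.
keras_model = keras.applications.DenseNet169(weights=None, classes=10)

# 3. Load the tf.keras weights into the keras model by layer name.
keras_model.load_weights("tf_model.hdf5", by_name=True)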

Is there a way to create and train a model without transfer learning using tensorflow object-detection api?

I'm using faster_rcnn_resnet50 to train a model that will detect corrosion in images, and I want to train the model from scratch instead of using transfer learning.
I don't know if this is right, but the reason I want to do this is that the existing weights (which were trained on COCO) will affect my model trained on corrosion images.
One way I would like to do this is to randomize or unfreeze the weights of the feature extractor in the ResNet50 and then train the model on my images, but there is no function or option in the resnet50 config file to randomize or unfreeze the weights.
I've made a new label map with a single label and tried it with transfer learning. It works, but I would like a model that is trained only on my images, so that the previous weights don't affect my predictions.
This is the first time I'm working with object detection and transfer learning. Will the weights of the pre-trained model on COCO affect my model which is trained on custom images of corrosion? How do you use tensorflow object-detection API without transfer learning?

Implement Gaussian Mixture Model using keras

I am trying to implement a Gaussian Mixture Model using Keras with the TensorFlow backend. Is there any guide or example of how to implement it?
Are you sure that is what you want? Do you want to integrate a GMM into a neural network?
TensorFlow and Keras are libraries to create, train, and use neural network models. A Gaussian Mixture Model is not a neural network.
