Keras loaded model input change - keras

Do you have any idea of an easy way to modify the input image size of a saved model in Keras? For example, the training input image size is 32x32, but at test time I would like to feed in the full 180x180 image. The model has been saved and is loaded at test time as follows:
json_file = open('autoencoder64a.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("autoencoder64a.h5")
Many thanks,
Tina

Is this a fully convolutional net? Otherwise you will not be able to reuse it with a different input size, as this would change the number of weights in the non-convolutional layers.
If it is indeed an FCN, you only need to change the first and last line of the code defining the model:
input_layer = Input((180,180))
#All other layers copied here from your old model,
#ending with 'last_layer =...'
new_model = Model(input_layer, last_layer)
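To make the idea concrete, here is a minimal sketch with a toy fully convolutional autoencoder standing in for the real one (the layers, filter counts and single-channel input are assumptions; substitute the layers from autoencoder64a). The point is the last line: because an FCN's weight shapes do not depend on the spatial size, get_weights/set_weights transfers them directly:
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

# Toy FCN autoencoder used only for illustration - replace with your own layers.
def build_autoencoder(height, width):
    inp = Input((height, width, 1))
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    out = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return Model(inp, out)

small_model = build_autoencoder(32, 32)    # input size used for training
# ... train small_model here, or load its trained weights from disk ...
big_model = build_autoencoder(180, 180)    # input size wanted at test time
big_model.set_weights(small_model.get_weights())  # weight shapes match, so this works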

Related

Pytorch DataParallel model load with map_location

Model saved with
net = Net()
model = torch.nn.DataParallel(net)
############################
# Training
############################
torch.save(model,'./model_shear_pre.pkl')
Model loading with
net = Net()
model = torch.nn.DataParallel(net, device_ids=[0,1])
model = torch.load('./model_shear_finish.pkl', map_location={'cuda:0':'cuda:0', 'cuda:1':'cuda:0', 'cuda:2':'cuda:1', 'cuda:3':'cuda:1'})
The problem is that I trained on a machine with 4 GPUs, and after saving the model I would like to test it on a new machine with only 2 GPUs.
After loading the saved model, I expect the model's device_ids to be [0,1], but they are still [0,1,2,3], the old setting. Is there anything wrong with how I save or load?
You should save the weights instead of the whole model.
net = Net()
model = torch.nn.DataParallel(net)
############################
# Training
############################
torch.save(model.state_dict(),'./model_shear_pre.pkl')
Then load the weights on the CPU before moving the model to the GPUs:
net = Net()
weights = torch.load('./model_shear_finish.pkl', map_location='cpu')
net.load_state_dict(weights)
model = torch.nn.DataParallel(net, device_ids=[0,1])
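One detail worth flagging as an addition to this answer: a state_dict saved from the DataParallel wrapper has its keys prefixed with module., so loading it straight into a plain Net can fail with missing/unexpected key errors. A minimal sketch of stripping the prefix before loading:
net = Net()
weights = torch.load('./model_shear_finish.pkl', map_location='cpu')
# Keys saved through DataParallel look like "module.conv1.weight";
# drop the "module." prefix so they match the plain Net's parameter names.
weights = {k.replace('module.', '', 1): v for k, v in weights.items()}
net.load_state_dict(weights)
model = torch.nn.DataParallel(net, device_ids=[0, 1])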
But if you already have a trained model that was saved as a whole model instead of just the weights, this might also work:
net = torch.load('./model_shear_finish.pkl', map_location='cpu')
model = torch.nn.DataParallel(net, device_ids=[0,1])
I still recommend saving only the weights, though. Saving and loading the whole model can really trip you up, because you have to import the model class in exactly the same way when saving and when loading, and a lot of the time that is a tricky thing to get right. For example:
train.py
from nets import Net
net = Net()
torch.save(net, './model_shear_finish.pkl')
inference.py
# this won't work
import nets
torch.load('./model_shear_finish.pkl', map_location='cpu')
# this will work
from nets import Net
torch.load('./model_shear_finish.pkl', map_location='cpu')

Output predicted image Tensorflow Lite

I am trying to figure out how I can save a predicted mask (output) from a TensorFlow model which has been converted to a tf.lite model on my PC. Any tips or ideas on how I can visualise it or save the predicted mask as a .png image? I have tried using the TensorFlow Lite inference guide at https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python without success.
The output currently looks like the following:
[ 1 512 512 3]
[[[[9.7955531e-01 2.0444747e-02]
[9.9987805e-01 1.2197520e-04]
[9.9978799e-01 2.1196880e-04]
.......
.......
[9.9997246e-01 2.7536058e-05]
[9.9997437e-01 2.5645388e-05]
[1.9125430e-03 9.9808747e-01]]]]
Any help is greatly appreciated.
Many thanks
import numpy as np
import tensorflow as tf

## Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="tflite_model.tflite")
print(interpreter.get_input_details())
print(interpreter.get_output_details())
print(interpreter.get_tensor_details())
interpreter.allocate_tensors()
## Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
## Test the model on input data.
input_shape = input_details[0]['shape']
print(input_shape)
## Use same image as Keras model
input_data = np.array(Xall, dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
## The function `get_tensor()` returns a copy of the tensor data.
## Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
output_data.shape
It depends on the meaning of your model output. Once you know what the values represent, use an image library like cv2 or PIL to draw and save the mask.
For example, the first row:
9.7955531e-01 2.0444747e-02
You need to figure out what these two values correspond to. With the limited information, it is hard to guess from the context.
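That said, if (and this is only an assumption, the question does not confirm it) the output is a [1, 512, 512, 2] array of per-pixel class probabilities from a binary segmentation model, a minimal sketch for saving it as a mask image could be:
import numpy as np
from PIL import Image

# output_data is assumed to have shape (1, 512, 512, 2): two class
# probabilities per pixel. argmax over the last axis gives a 0/1 mask,
# which is scaled to 0/255 for an 8-bit grayscale PNG.
mask = np.argmax(output_data[0], axis=-1).astype(np.uint8) * 255
Image.fromarray(mask).save("predicted_mask.png")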

Merging same vgg16 model but with different inputs

I am working on a classification problem in a project. The specificity of my problem is that I have to use two different types of data to manage it. My classes are Car, Pedestrian, Truck and Cyclist. My dataset is composed of:
- Images coming from the camera: these are RGB images.
- Images obtained by projecting the Lidar point cloud (just 3D points) onto the 2D camera plane and encoding the pixels using depth and reflectance.
I already managed to use both modalities to perform the classification task by using the Concatenate function of the Keras API.
But what I would like to do is use a more powerful CNN like VGG. I used a pre-trained model and froze all layers except the last 4. I read the grayscale image as RGB because the VGG16 pre-trained model needs a 3-channel input. Here is my code:
from keras.applications import VGG16
from keras.layers import Concatenate, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model
#Load the VGG model
#Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
#Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape= (227, 227, 3))
for layer in vgg_conv_D.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv_C.layers[:-4]:
    layer.trainable = False
mergedModel = Concatenate()([vgg_conv_C.output,vgg_conv_D.output])
mergedModel = Dense(units = 1024)(mergedModel)
mergedModel = BatchNormalization()(mergedModel)
mergedModel = Activation('relu')(mergedModel)
mergedModel = Dropout(0.5)(mergedModel)
mergedModel = Dense(units = 4,activation = 'softmax')(mergedModel)
fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], mergedModel)
The last line gives the following error:
ValueError: The name "block1_conv1" is used 2 times in the model. All
layer names should be unique.
Does someone know how to handle this? To put it simply, I just want to use VGG16 on both types of images, get the feature vectors for each modality, concatenate them, and add fully connected layers on top to predict the image's class. It works with non-pre-trained models. I can provide the code if needed.
Try this
#Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_C.layers:
    layer.name = layer.name + str('_C')
#Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_D.layers:
    layer.name = layer.name + str('_D')
In this way, you'd still be able to use two identical pre-trained networks.
As mentioned in the error,
ValueError: The name "block1_conv1" is used 2 times in the model. All
layer names should be unique.
Therefore, either use a Siamese network (a single shared VGG16 instance), or, if you use a dual-CNN setup, make sure the layer names in the network are unique. The simplest way is to copy the network for the second configuration and change the layer names.
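For the first option, here is a minimal sketch of the shared (Siamese-style) setup, reusing the 227x227 inputs and 4 classes from the question; the Flatten layers and the Dense head are assumptions added to keep the example self-contained:
from keras.applications import VGG16
from keras.layers import Input, Concatenate, Dense, Flatten
from keras.models import Model

# One VGG16 instance shared by both inputs, so no layer names collide
# and both modalities use the same weights.
shared_vgg = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

input_C = Input((227, 227, 3))
input_D = Input((227, 227, 3))
features_C = Flatten()(shared_vgg(input_C))
features_D = Flatten()(shared_vgg(input_D))

merged = Concatenate()([features_C, features_D])
merged = Dense(4, activation='softmax')(merged)
siamese_model = Model([input_C, input_D], merged)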
IStackoverflowAndIKnowThings solution gives me the error:
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
The following worked for me (see this post):
..
for layer in vgg_conv_C.layers:
    layer._name = layer._name + str('_C')
..
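Putting the rename fix together with the merge from the question, a minimal sketch of the dual-copy variant might look like this (the Flatten layers and the single softmax head are assumptions; on recent Keras/tf.keras versions layer.name is read-only, hence the _name workaround):
from keras.applications import VGG16
from keras.layers import Concatenate, Dense, Flatten
from keras.models import Model

vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

#Rename every layer so the two copies do not collide inside one model
for layer in vgg_conv_C.layers:
    layer._name = layer._name + '_C'
for layer in vgg_conv_D.layers:
    layer._name = layer._name + '_D'

#Merge the two feature extractors and add a small classification head
merged = Concatenate()([Flatten()(vgg_conv_C.output), Flatten()(vgg_conv_D.output)])
merged = Dense(units=4, activation='softmax')(merged)
fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], merged)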

How to do transfer-learning on our own models?

I am trying to apply transfer learning to my CNN model, and I am getting the error below.
model = model1(weights = "model1_weights", include_top=False)
This raises:
TypeError: __call__() takes exactly 2 arguments (1 given)
Thanks
If you are trying to do transfer learning with a custom model, the answer depends on how you saved your model architecture (description) and weights.
1. If you saved the description and weights of the model in a single .h5 file.
You can easily load model, using keras's load_model method.
from keras.models import load_model
model = load_model("model_path.h5")
2. If you saved the description and weights of the model in separate files (e.g. in .json and .h5 files respectively).
You can first load the model description from json file and then load model weights.
from keras.models import model_from_json
with open("path_to_json_file.json") as json_file:
    model = model_from_json(json_file.read())
model.load_weights("path_to_weights_file.h5")
After the old model is loaded, you can decide which layers to discard (usually the top fully connected layers) and which layers to freeze.
Let's assume you want to use the first five layers of the model without training them again, train the next three again, discard the remaining layers (here it is assumed that the network has more than eight layers), and add three fully connected layers after the last kept layer. This can be done as follows.
Freeze the first five layers
for i in range(5):
    model.layers[i].trainable = False
Make the next three layers trainable; this can be skipped if all layers are already trainable.
for i in range(5,8):
    model.layers[i].trainable = True
Add three more layers
from keras.layers import Dense
from keras.models import Model

ll = model.layers[8].output
ll = Dense(32)(ll)
ll = Dense(64)(ll)
ll = Dense(num_classes, activation="softmax")(ll)
new_model = Model(inputs=model.input, outputs=ll)
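Once new_model is assembled, training proceeds as usual; a brief sketch (the optimizer, loss and the x_train/y_train arrays are placeholders, not part of the original answer):
new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# x_train and y_train stand in for your own training data
new_model.fit(x_train, y_train, epochs=10, batch_size=32)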

Modify layers in resnet model

I am trying to train a ResNet50 model for an image classification problem. I have loaded the pretrained 'imagenet' weights before training the model on my dataset. I want to insert a layer (a mean subtraction layer) between the input layer and the first convolution layer.
import theano.tensor as T
from keras.applications.resnet50 import ResNet50

model = ResNet50(weights='imagenet')
def mean_subtract(img):
    img = T.set_subtensor(img[:,0,:,:], img[:,0,:,:] - 123.68)
    img = T.set_subtensor(img[:,1,:,:], img[:,1,:,:] - 116.779)
    img = T.set_subtensor(img[:,2,:,:], img[:,2,:,:] - 103.939)
    return img / 255.0
I want to insert inputs = Lambda(mean_subtract, name='mean_subtraction')(inputs) right after the input layer and connect it to the first convolution layer of the ResNet model without losing the saved weights.
How do I do that?
Thanks!
Quick answer (Seems better than adding the function to the model)
Use the preprocessing function as described here: preprocessing images generated using keras function ImageDataGenerator() to train resnet50 model
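A minimal sketch of that approach, assuming channels-last numpy images and reusing the mean values from the question (the function name and the generator setup are illustrative, not taken from the linked post):
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def mean_subtract_np(img):
    # img is one image as a float numpy array of shape (height, width, 3)
    img = img - np.array([123.68, 116.779, 103.939])
    return img / 255.0

# The generator applies the function to every image it yields,
# so the model itself does not need an extra Lambda layer.
datagen = ImageDataGenerator(preprocessing_function=mean_subtract_np)
#train_gen = datagen.flow_from_directory('path/to/train', target_size=(224, 224))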
Long answer
Since your function doesn't change shapes, you can put it in an outer model without changing the ResNet model (changing models may not be so simple; I always try to assemble new models from parts of other models when needed).
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Lambda
from keras.models import Model

resnet_model = ResNet50(weights='imagenet')
inputs = Input((None, None, 3))
#it seems you're using (3,None,None) instead.
#choose based on your "data_format", which by default is channels_last
outputs = Lambda(mean_subtract)(inputs)  #output_shape is not necessary with the TensorFlow backend
outputs = resnet_model(outputs)
model = Model(inputs, outputs)
