How to restore KerasClassifier? - keras

After saving the weights and JSON configuration of a KerasClassifier model (https://github.com/keras-team/keras/blob/master/keras/wrappers/scikit_learn.py), I need to restore it and verify the results.
But if I restore the weights and the model, I get a Sequential object; how can I rebuild the original KerasClassifier from that?

I'm not sure I understood you correctly, but I propose the following solution. KerasClassifier inherits from BaseWrapper, which has the following __init__ signature:
def __init__(self, build_fn=None, **sk_params):
    self.build_fn = build_fn
    self.sk_params = sk_params
    self.check_params(sk_params)
Okay, so what are build_fn and sk_params?
The build_fn should construct, compile and return a Keras model, which
will then be used to fit/predict. One of the following
three values could be passed to build_fn:
1. A function
2. An instance of a class that implements the __call__ method
3. None. This means you implement a class that inherits from either
KerasClassifier or KerasRegressor. The __call__ method of the
present class will then be treated as the default build_fn.
...
sk_params takes both model parameters and fitting parameters. Legal model
parameters are the arguments of build_fn. Note that like all other
estimators in scikit-learn, build_fn should provide default values for
its arguments, so that you could create the estimator without passing any
values to sk_params.
...
(some comments are omitted)
You can read the full comment at this link and this link.
Since build_fn expects a function that returns a compiled Keras model (no matter whether it is a Sequential or a plain Model), you can pass a function that returns the loaded model.
Edit: with this approach you also have to call fit with some data to restore the model.
Load the model as build_fn
The fit method invokes the build_fn, hence each time you try to train such a classifier you will first load and then fit the loaded classifier.
For example:
from keras.models import load_model  # or another method - but this one is the simplest
from keras.wrappers.scikit_learn import KerasClassifier

def build_fn(*args, **kwargs):
    """probably this function expects sk_params, so you can use it in theory"""
    path = "my_model.hd5"
    return load_model(path)  # don't name this function load_model, or it shadows the import and recurses

keras_classifier = KerasClassifier(build_fn, **sk_params)  # use your sk_params
keras_classifier.fit(X_tr, y_tr)  # I use a slice of shape (1, input_shape) to train
This will work, as the loaded model is already trained and compiled. But fitting gives your weights a small shift, even if you call it with a batch of size 1 and for a single epoch.
Load via a build_fn closure
You can also load the model first (if you wish to provide the path easily and hardcoding it is unacceptable), then return a function which is "build_fn-acceptable":
def load_model_return_build_fn(path):
    model = load_model(path)

    def build_fn(*args, **kwargs):
        """probably this function expects sk_params"""
        return model  # defined above

    return build_fn
build_fn = load_model_return_build_fn("model.hd5")
keras_classifier = KerasClassifier(build_fn, **sk_params)  # use your sk_params
keras_classifier.fit(X_tr, y_tr)  # I use a slice of shape (1, input_shape) to train
Assign a model to its attribute
If you just plan to load and use a pre-trained model, you can load it by any means, assign it to the model attribute, and not call fit.
build_fn = load_model_return_build_fn("model.hd5")
# or any function which really builds and fits a model
keras_classifier = KerasClassifier(build_fn, **sk_params)  # use your sk_params
keras_classifier.model = load_model("model.hd5")  # assign the model here, don't call fit
In that case you set the model explicitly on its attribute. Note that build_fn should still be a correct build_fn - otherwise it won't pass the self.check_params(sk_params) test.
Inherit from KerasClassifier (not as easy as I thought)
After all, the best solution I know is to inherit from KerasClassifier and add a load and/or a from_file method.
import numpy as np
from keras.models import load_model
from keras.wrappers.scikit_learn import KerasClassifier

class KerasClassifierLoadable(KerasClassifier):
    @classmethod
    def from_file(cls, path, *args, **kwargs):
        keras_classifier = cls(*args, **kwargs)
        keras_classifier.model = load_model(path)
        outp_shape = keras_classifier.model.layers[-1].output_shape[-1]
        if outp_shape > 1:
            keras_classifier.classes_ = np.arange(outp_shape, dtype='int32')
        else:
            raise ValueError("Inconsistent output shape: outp_shape={}".format(outp_shape))
        keras_classifier.n_classes_ = len(keras_classifier.classes_)
        return keras_classifier

    def load(self, path):
        self.model = load_model(path)
        outp_shape = self.model.layers[-1].output_shape[-1]
        if outp_shape > 1:
            self.classes_ = np.arange(outp_shape, dtype='int32')
        else:
            raise ValueError("Inconsistent output shape: outp_shape={}".format(outp_shape))
        self.n_classes_ = len(self.classes_)
Here we should set self.classes_ to the correct class labels - but I just use integer values from `range(0, n_classes)`.
Usage (the build_fn can be any appropriate build_fn):
keras_classifier = KerasClassifierLoadable.from_file("model.hd5", build_fn=build_fn)
# or
keras_classifier = KerasClassifierLoadable(build_fn=build_fn)
keras_classifier.load("model.hd5")

If you have the two files model.json and model.h5, then you can easily load the model and use it as you want.
from keras.models import model_from_json

# load the architecture from JSON and the weights from HDF5
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("model.h5")

# evaluate the loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1] * 100))

Related

How does keras.utils.Sequence work?

I am trying to create a data pipeline for a U-Net for image segmentation. I came across the keras.utils.Sequence class, through which I can create a data pipeline, but I am unable to understand how it works.
Links for the code: Keras code, Source code
def __iter__(self):
    """Create a generator that iterate over the Sequence."""
    for item in (self[i] for i in range(len(self))):
        yield item
I would highly appreciate it if anyone could tell me how this works.
You don't need a generator. The Sequence class is there to manage that. You need to define a class that inherits from tensorflow.keras.utils.Sequence and define the methods __init__, __getitem__, and __len__. In addition, you can define the method on_epoch_end, which is called at the end of each epoch and is usually used to shuffle the sample indexes.
There is an example in the link you gave (Tensorflow Sequence).
Below is another example of Sequence.
Note that you can pass the data to the __init__ constructor, but you may as well read the data from files in the __getitem__ method, assuming you know where to read it, e.g. by passing the name of a directory or directories into the constructor. This is necessary if there is a lot of data.
from tensorflow import keras
import numpy as np

class SequenceExample(keras.utils.Sequence):
    def __init__(self, x_in, y_in, batch_size, shuffle=True):
        # Initialization
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.x = x_in
        self.y = y_in
        self.datalen = len(y_in)
        self.indexes = np.arange(self.datalen)
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __getitem__(self, index):
        # get batch indexes from shuffled indexes
        batch_indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        x_batch = self.x[batch_indexes]
        y_batch = self.y[batch_indexes]
        return x_batch, y_batch

    def __len__(self):
        # Denotes the number of batches per epoch
        return self.datalen // self.batch_size

    def on_epoch_end(self):
        # Updates indexes after each epoch
        self.indexes = np.arange(self.datalen)
        if self.shuffle:
            np.random.shuffle(self.indexes)
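As a usage sketch (the data and model here are placeholders, not from the question), you can pass the Sequence straight to fit():

from tensorflow import keras
import numpy as np

# Toy data and model, just to show the wiring.
x_train = np.random.rand(1000, 16).astype('float32')
y_train = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(16,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# fit() drives the Sequence: it calls __len__ to learn the number of batches,
# __getitem__ for each batch, and on_epoch_end between epochs.
train_seq = SequenceExample(x_train, y_train, batch_size=32)
model.fit(train_seq, epochs=2)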

Intermediate layer loss calculation for conditional computation

I want to create an MLP-based custom CNN model (multi-scaled) consisting of several parallel small networks (capsules). These simple small networks are instantiated as a custom layer (conv2d -> Flatten -> Dense) for each convolution scale, i.e. 3x3, 5x5. The purpose of these capsule networks is to generate intermediate loss awareness to reduce the overall global loss of the CNN model. I have written some sketchy code, but I'm not able to write the correct code for computing the local loss using these capsules. Here's the code:
from tensorflow.keras import layers
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Layer

class capsule(tf.keras.layers.Layer):
    def __init__(self):
        super(capsule, self).__init__()
        self.loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
        self.Flatten = tf.keras.layers.Flatten()
        self.conv2D = tf.keras.layers.Conv2D(3, 3, (1, 1), padding='same',
                                             activation='relu', name="LocalLoss3x3")
        self.classifier = tf.keras.layers.Dense(10, activation='softmax',
                                                name='capsule3Output')

    def call(self, inputs):
        x = self.conv2D(inputs)
        x = self.Flatten(x)
        x = self.classifier(x)
        pred = self(x_train)
        loss = self.loss_fn(pred, y_train)
        #self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return loss, x

(x_train, y_train), (x_test, y_test) = mnist.load_data()

from tensorflow.keras import layers

class SparseMLP(tf.keras.models.Model):
    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = layers.Dense(1, activation=tf.nn.relu)
        self.capsule = capsule()
        self.dense_2 = layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        loss, x = self.capsule(inputs)
        return self.dense_2(x)

mlp = SparseMLP(10)
#x_train = x_train.reshape(-1, 28, 28, 1)
y = mlp(x_train)
To include a loss within a layer, you can use the add_loss function of the tf.keras.layers.Layer class. This function takes a loss value and adds it to the global loss function defined in the compile function.
You can call self.add_loss(loss_value) from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to compile()).
So to make your model consider the losses from the intermediate layer, you should uncomment the add_loss call and then train the model the usual way.
Please mind that it is totally fine to not declare a "main" loss in the compile function, as there already is a loss that you're defining in your layer class.
Note that when you pass losses via add_loss(), it becomes possible to call compile() without a loss function, since the model already has a loss to minimize.
Please note that the call function of the SparseMLP model should look like this:
def call(self, inputs):
    x = self.dense_1(inputs)
    # I don't know if you intend to pass `inputs` (rather than x) into the
    # capsule. Currently the output from dense_1 is not used at all, so make
    # sure you're passing the proper inputs to your layers. Also, you do not
    # have to return the loss here, as it is tracked internally by Keras.
    x = self.capsule(inputs)
    return self.dense_2(x)
So running your model like below should do the trick:
model.compile(loss=your_main_loss,  # define your main loss here, if there is one
              metrics=your_metrics)  # define your metrics
model.fit(x=train_inst, y=train_targets)
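For illustration, a minimal sketch of a layer that registers its own loss via add_loss (the rate parameter and the regularization-style local loss are assumptions; it mirrors the commented-out line in the question, which only needs the layer's own inputs rather than labels):

import tensorflow as tf

# Hypothetical sketch: the layer adds a local loss term that Keras collects
# into the "main" loss during training automatically.
class Capsule(tf.keras.layers.Layer):
    def __init__(self, rate=1e-3):
        super(Capsule, self).__init__()
        self.rate = rate  # assumed weighting factor for the local loss
        self.conv2D = tf.keras.layers.Conv2D(3, 3, padding='same', activation='relu')
        self.flatten = tf.keras.layers.Flatten()
        self.classifier = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        x = self.conv2D(inputs)
        x = self.flatten(x)
        x = self.classifier(x)
        # Local loss term, tracked internally and added to the compiled loss.
        self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return x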

Adding parameters to a pretrained model

In PyTorch, we load a pretrained model as follows:
net.load_state_dict(torch.load(path)['model_state_dict'])
Then the network structure and the loaded model have to be exactly the same. However, is it possible to load the weights but then modify the network/add an extra parameter?
Note:
If we add an extra parameter to the model earlier before loading the weights, e.g.
self.parameter = Parameter(torch.ones(5),requires_grad=True)
we will get a Missing key(s) in state_dict error when loading the weights.
Let's create a model and save its state.
import torch
import torch.nn as nn

class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.encoder = nn.LSTM(100, 50)

    def forward(self):
        pass

model1 = Model1()
torch.save(model1.state_dict(), 'filename.pt')  # saving model
Then create a second model which has a few layers in common with the first model. Load the state of the first model and copy it into the common layers of the second model.
class Model2(nn.Module):
    def __init__(self):
        super(Model2, self).__init__()
        self.encoder = nn.LSTM(100, 50)
        self.linear = nn.Linear(50, 200)

    def forward(self):
        pass

model1_dict = torch.load('filename.pt')
model2 = Model2()
model2_dict = model2.state_dict()

# 1. filter out unnecessary keys
filtered_dict = {k: v for k, v in model1_dict.items() if k in model2_dict}
# 2. overwrite entries in the existing state dict
model2_dict.update(filtered_dict)
# 3. load the new state dict
model2.load_state_dict(model2_dict)
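As an aside, recent PyTorch versions also let you skip the filtering step by passing strict=False, which ignores missing and unexpected keys (a sketch, reusing Model2 and filename.pt from above):

# Alternative: load non-strictly; missing keys (e.g. model2.linear) keep
# their initial values, and the return value reports what was skipped.
model2 = Model2()
incompatible = model2.load_state_dict(torch.load('filename.pt'), strict=False)
print(incompatible.missing_keys)  # e.g. the linear layer's weight and bias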

How to save best model in Keras based on AUC metric?

I would like to save the best model in Keras based on auc and I have this code:
def MyMetric(yTrue, yPred):
    auc = tf.metrics.auc(yTrue, yPred)
    return auc

best_model = [ModelCheckpoint(filepath='best_model.h5', monitor='MyMetric',
                              save_best_only=True)]

train_history = model.fit([train_x], [train_y],
                          batch_size=batch_size, epochs=epochs,
                          validation_split=0.05,
                          callbacks=best_model, verbose=2)
So my model runs, but I get this warning:
RuntimeWarning: Can save best model only with MyMetric available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
It would be great if anyone could tell me whether this is the right way to do it, and if not, what I should do.
You have to pass the Metric you want to monitor to model.compile.
https://keras.io/metrics/#custom-metrics
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[MyMetric])
Also, tf.metrics.auc returns a tuple containing the tensor and update_op. Keras expects the custom metric function to return only a tensor.
def MyMetric(yTrue, yPred):
    import tensorflow as tf
    auc = tf.metrics.auc(yTrue, yPred)
    return auc[0]
After this step, you will get errors about uninitialized values. Please see these threads:
https://github.com/keras-team/keras/issues/3230
How to compute Receiving Operating Characteristic (ROC) and AUC in keras?
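The usual workaround for those uninitialized-value errors (a sketch, assuming the TF1 session-based Keras backend) is to initialize the local variables that tf.metrics.auc creates before training starts:

import keras.backend as K
import tensorflow as tf

# tf.metrics.auc creates local variables (confusion-matrix accumulators)
# that must be initialized before the metric can be evaluated.
K.get_session().run(tf.local_variables_initializer())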
You can define a custom metric that calls tensorflow to compute AUROC in the following way:
def as_keras_metric(method):
    import functools
    from keras import backend as K
    import tensorflow as tf

    @functools.wraps(method)
    def wrapper(self, args, **kwargs):
        """ Wrapper for turning tensorflow metrics into keras metrics """
        value, update_op = method(self, args, **kwargs)
        K.get_session().run(tf.local_variables_initializer())
        with tf.control_dependencies([update_op]):
            value = tf.identity(value)
        return value
    return wrapper

@as_keras_metric
def AUROC(y_true, y_pred, curve='ROC'):
    return tf.metrics.auc(y_true, y_pred, curve=curve)
You then need to compile your model with this metric:
model.compile(loss=train_loss, optimizer='adam', metrics=['accuracy', AUROC])
Finally, checkpoint the model in the following way:
model_checkpoint = keras.callbacks.ModelCheckpoint(path_to_save_model, monitor='val_AUROC',
                                                   verbose=0, save_best_only=True,
                                                   save_weights_only=False, mode='auto', period=1)
Be careful though: I believe the validation AUROC is calculated batch-wise and averaged, so it might give some errors with checkpointing. A good idea is to verify, after model training finishes, that the AUROC of the trained model's predictions (computed with sklearn.metrics) matches what TensorFlow reports while training and checkpointing.
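A quick way to do that verification (a sketch; model, x_val, and y_val are placeholders for your trained model and held-out data):

from sklearn.metrics import roc_auc_score

# Compare sklearn's exact AUROC on held-out data against the batch-averaged
# value reported during training/checkpointing.
y_scores = model.predict(x_val).ravel()
print("sklearn AUROC:", roc_auc_score(y_val, y_scores))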
Assuming you use TensorBoard, you have a historical record, in the form of tfevents files, of all your metric calculations for all your epochs; a tf.keras.callbacks.Callback is then what you want.
I use tf.keras.callbacks.ModelCheckpoint with save_freq='epoch' to save the weights for each epoch, as an h5 file or tf file.
To avoid filling the hard drive with model files, write a new Callback, or extend the ModelCheckpoint class's on_epoch_end implementation:
def on_epoch_end(self, epoch, logs=None):
    super(DropWorseModels, self).on_epoch_end(epoch, logs)
    if epoch < self._keep_best:
        return
    model_files = frozenset(
        filter(lambda filename: path.splitext(filename)[1] == SAVE_FORMAT_WITH_SEP,
               listdir(self._model_dir)))
    if len(model_files) < self._keep_best:
        return
    tf_events_logs = tuple(islice(log_parser(tfevents=path.join(self._log_dir,
                                                                self._split),
                                             tag=self.monitor),
                                  0,
                                  self._keep_best))
    keep_models = frozenset(map(self._filename.format,
                                map(itemgetter(0), tf_events_logs)))
    if len(keep_models) < self._keep_best:
        return
    it_consumes(map(lambda filename: remove(path.join(self._model_dir, filename)),
                    model_files - keep_models))
Appendix (imports and utility function implementations):
from itertools import islice
from operator import itemgetter
from os import path, listdir, remove
from collections import deque

import tensorflow as tf
from tensorflow.core.util import event_pb2

def log_parser(tfevents, tag):
    values = []
    for record in tf.data.TFRecordDataset(tfevents):
        event = event_pb2.Event.FromString(tf.get_static_value(record))
        if event.HasField('summary'):
            value = event.summary.value.pop(0)
            if value.tag == tag:
                values.append(value.simple_value)
    return tuple(sorted(enumerate(values), key=itemgetter(1), reverse=True))

it_consumes = lambda it, n=None: deque(it, maxlen=0) if n is None \
    else next(islice(it, n, n), None)

SAVE_FORMAT = 'h5'
SAVE_FORMAT_WITH_SEP = '{}{}'.format(path.extsep, SAVE_FORMAT)
For completeness, the rest of the class:
class DropWorseModels(tf.keras.callbacks.Callback):
    """
    Designed around making `save_best_only` work for arbitrary metrics
    and thresholds between metrics
    """

    def __init__(self, model_dir, monitor, log_dir, keep_best=2, split='validation'):
        """
        Args:
            model_dir: directory to save weights. Files will have format
                       '{model_dir}/{epoch:04d}.h5'.
            split: dataset split to analyse, e.g., one of 'train', 'test', 'validation'
            monitor: quantity to monitor.
            log_dir: the path of the directory where to save the log files to be
                     parsed by TensorBoard.
            keep_best: number of models to keep, sorted by monitor value
        """
        super(DropWorseModels, self).__init__()
        self._model_dir = model_dir
        self._split = split
        self._filename = 'model-{:04d}' + SAVE_FORMAT_WITH_SEP
        self._log_dir = log_dir
        self._keep_best = keep_best
        self.monitor = monitor
This has the added advantage of being able to save and delete multiple model files in a single Callback. You can easily extend it with different thresholding support, e.g., keep all model files with an AUC within a threshold, or with TP, FP, TN, FN within a threshold.
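A hypothetical usage sketch (directory names and the monitored tag are placeholders; it assumes a TensorBoard callback writes tfevents files to the same log_dir that DropWorseModels parses):

# TensorBoard writes the tfevents files that DropWorseModels reads, and
# ModelCheckpoint emits one weights file per epoch for it to prune.
callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='logs'),
    tf.keras.callbacks.ModelCheckpoint('models/model-{epoch:04d}.h5',
                                       save_freq='epoch'),
    DropWorseModels(model_dir='models', monitor='epoch_auc',
                    log_dir='logs', keep_best=2),
]
model.fit(x_train, y_train, validation_split=0.1, callbacks=callbacks)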

Can't save custom subclassed model

Inspired by tf.keras.Model subclassing, I created a custom model.
I can train it and get successful results, but I can't save it.
I use python3.6 with tensorflow v1.10 (or v1.9)
Minimal complete code example here:
import tensorflow as tf
from tensorflow.keras.datasets import mnist

class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__(name="custom_model")
        self.batch_norm1 = tf.layers.BatchNormalization()
        self.conv1 = tf.layers.Conv2D(32, (7, 7))
        self.pool1 = tf.layers.MaxPooling2D((2, 2), (2, 2))
        self.batch_norm2 = tf.layers.BatchNormalization()
        self.conv2 = tf.layers.Conv2D(64, (5, 5))
        self.pool2 = tf.layers.MaxPooling2D((2, 2), (2, 2))

    def call(self, inputs, training=None, mask=None):
        x = self.batch_norm1(inputs)
        x = self.conv1(x)
        x = tf.nn.relu(x)
        x = self.pool1(x)
        x = self.batch_norm2(x)
        x = self.conv2(x)
        x = tf.nn.relu(x)
        x = self.pool2(x)
        return x

if __name__ == '__main__':
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(*x_train.shape, 1)[:1000]
    y_train = y_train.reshape(*y_train.shape, 1)[:1000]
    x_test = x_test.reshape(*x_test.shape, 1)
    y_test = y_test.reshape(*y_test.shape, 1)
    y_train = tf.keras.utils.to_categorical(y_train)
    y_test = tf.keras.utils.to_categorical(y_test)

    model = Classifier()
    inputs = tf.keras.Input((28, 28, 1))
    x = model(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(10, activation="sigmoid")(x)
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, shuffle=True)
    model.save("./my_model")
Error message:
1000/1000 [==============================] - 1s 1ms/step - loss: 4.6037 - acc: 0.7025
Traceback (most recent call last):
File "/home/user/Data/test/python/mnist/mnist_run.py", line 62, in <module>
model.save("./my_model")
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1278, in save
save_model(self, filepath, overwrite, include_optimizer)
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 101, in save_model
'config': model.get_config()
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1049, in get_config
layer_config = layer.get_config()
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1028, in get_config
raise NotImplementedError
NotImplementedError
Process finished with exit code 1
I looked into the error line and found out that the get_config method checks self._is_graph_network.
Has anybody dealt with this problem?
Thanks!
Update 1:
On Keras 2.2.2 (not tf.keras), I found a comment about model saving in
file keras/engine/network.py, function get_config:
# Subclassed networks are not serializable
# (unless serialization is implemented by
# the author of the subclassed network).
So, obviously it won't work...
I wonder why they don't point it out in the documentation (like: "Use subclassing without the ability to save!").
Update 2:
Found in the Keras documentation:
In subclassed models, the model's topology is defined as Python code
(rather than as a static graph of layers). That means the model's
topology cannot be inspected or serialized. As a result, the following
methods and attributes are not available for subclassed models:
model.inputs and model.outputs.
model.to_yaml() and model.to_json()
model.get_config() and model.save().
So, there is no way to save a model built by subclassing.
It's only possible to use Model.save_weights().
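A minimal sketch of that weights-only workaround, reusing the Classifier and training script from the question (the file name is a placeholder):

model.save_weights("./my_model_weights.h5")

# Rebuild the architecture from code, then load the weights back.
def build_model():
    classifier = Classifier()
    inputs = tf.keras.Input((28, 28, 1))
    x = classifier(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(10, activation="sigmoid")(x)
    return tf.keras.Model(inputs=inputs, outputs=x)

restored = build_model()
restored.load_weights("./my_model_weights.h5")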
TensorFlow 2.2
Thanks to @cal for letting me know that the new TensorFlow has supported saving custom models!
Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippet describes how to implement them.
from tensorflow import keras
from tensorflow.keras import layers

class ThreeLayerMLP(keras.Model):
    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

# Save the model
model.save('path_to_my_model', save_format='tf')

# Recreate the exact same model purely from the file
new_model = keras.models.load_model('path_to_my_model')
See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models
TensorFlow 2.0
TL;DR:
do not use model.save() for custom subclassed Keras models;
use save_weights() and load_weights() instead.
With the help of the TensorFlow team, it turns out the best practice for saving a custom subclassed Keras model is to save its weights and load them back when needed.
The reason we cannot simply save a Keras custom subclassed model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved/loaded without any problem when we have the same model structure and custom code.
There is a great tutorial written by Francois Chollet, the author of Keras, on how to save/load Sequential/Functional/Keras/Custom Sub-Class Models in TensorFlow 2.0 in Colab here. In the Saving Subclassed Models section, it says:
Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.
A subclassed model differs in that it's not a datastructure, it's a
piece of code. The architecture of the model is defined via the body
of the call method. This means that the architecture of the model
cannot be safely serialized. To load a model, you'll need to have
access to the code that created it (the code of the model subclass).
Alternatively, you could be serializing this code as bytecode (e.g.
via pickling), but that's unsafe and generally not portable.
This will be fixed in an upcoming release according to the 1.13 pre-release patch notes:
Keras & Python API:
Subclassed Keras models can now be saved through tf.contrib.saved_model.save_keras_model.
EDIT:
It seems this is not quite as finished as the notes suggest. The docs for that function for v1.13 state:
Model limitations: - Sequential and functional models can always be saved. - Subclassed models can only be saved when serving_only=True. This is due to the current implementation copying the model in order to export the training and evaluation graphs. Because the topology of subclassed models cannot be determined, the subclassed models cannot be cloned. Subclassed models will be entirely exportable in the future.
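If you want to try it on 1.13, the call would look roughly like this (a sketch based solely on the notes and docs quoted above; serving_only=True per the stated limitation for subclassed models):

import tensorflow as tf

# Per the quoted 1.13 docs: subclassed models can only be exported with
# serving_only=True, since their training/evaluation graphs cannot be cloned.
tf.contrib.saved_model.save_keras_model(model, './saved_model', serving_only=True)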
TensorFlow 2.1 allows saving subclassed models in the SavedModel format
Since I began using TensorFlow, I have always been a fan of model subclassing; I feel this way of building models is more Pythonic and collaboration-friendly. But saving the model was always a pain point with this approach.
Recently I started to update my knowledge and reached the following information, which seems to hold for TensorFlow 2.1:
Subclassed Models
I found this:
Second approach is by using model.save to save the whole model and by using load_model to restore a previously stored subclassed model.
This last one saves the model, the weights, and other stuff into a SavedModel file.
And lastly, the confirmation:
Saving custom objects:
If you are using the SavedModel format, you can
skip this section. The key difference between HDF5 and SavedModel is
that HDF5 uses object configs to save the model architecture, while
SavedModel saves the execution graph. Thus, SavedModels are able to
save custom objects like subclassed models and custom layers without
requiring the original code.
I tested this personally, and effectively, model.save() for subclassed models generates a SavedModel save. There is no more need to use model.save_weights() or related functions; they now serve more specific use cases.
This should be the end of this painful path for all of us interested in model subclassing.
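For completeness, a minimal sketch of that flow (model is any built/trained subclassed tf.keras.Model):

import tensorflow as tf

# With TF >= 2.1, model.save() defaults to the SavedModel format when the
# path has no .h5 extension, so this works for subclassed models too.
model.save('my_subclassed_model')
restored = tf.keras.models.load_model('my_subclassed_model')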
I found a way to solve it. Create a new model and load the weights from the saved .h5 model. This way is not preferred, but it works with keras 2.2.4 and tensorflow 1.12.
class MyModel(keras.Model):
    def __init__(self, inputs, *args, **kwargs):
        outputs = func(inputs)  # func is your function that builds the graph of layers
        super(MyModel, self).__init__(inputs=inputs, outputs=outputs, *args, **kwargs)

def get_model():
    return MyModel(inputs, *args, **kwargs)

model = get_model()
model.save('file_path.h5')

model_new = get_model()
model_new.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model_new.load_weights('file_path.h5')
model_new.evaluate(x_test, y_test, **kwargs)
UPDATE: Jul 20
Recently I also tried to create my own subclassed layers and model. Writing your own get_config() function might be difficult, so I used model.save_weights(path_to_model_weights) and model.load_weights(path_to_model_weights). When you want to load the weights, remember to create the model with the same architecture, then do model.load_weights(). See the TensorFlow guide for more details.
Old Answer (Still correct)
Actually, the TensorFlow documentation says:
In order to save/load a model with custom-defined layers, or a subclassed model, you should overwrite the get_config and optionally from_config methods. Additionally, you should register the custom object so that Keras is aware of it.
For example:
import tensorflow as tf
from tensorflow import keras

class Linear(keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        config = super(Linear, self).get_config()
        config.update({"units": self.units})
        return config

layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
The output is:
{'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64}
You can play with this simple code. For example, in the get_config() function, remove config.update() and see what happens. See this and this for more details. These are the Keras guides on the TensorFlow website.
Use model.predict before tf.saved_model.save
Actually, recreating the model with
keras.models.load_model('path_to_my_model')
didn't work for me.
First we have to save the weights from the built model:
model.save_weights('model_weights', save_format='tf')
Then we have to instantiate a new instance of the subclassed Model, compile it, call train_on_batch with one record, and load_weights from the built model:
loaded_model = ThreeLayerMLP(name='3_layer_mlp')
loaded_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
loaded_model.train_on_batch(x_train[:1], y_train[:1])
loaded_model.load_weights('model_weights')
This works perfectly in TensorFlow==2.2.0.
