load_model with a metric - keras

I save a model with a metric I defined, as is done here.
There they do the following:
def get_lr_metric(optimizer):
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

optimizer = keras.optimizers.Adam()
lr_metric = get_lr_metric(optimizer)
model.compile(
    optimizer=optimizer,
    metrics=['accuracy', lr_metric],
    loss='mean_absolute_error',
)
It works great.
However, when I try to load this model:
keras.models.load_model(model_path, custom_objects={'get_lr_metric': get_lr_metric})
I get:
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().
Trying the solution here:
def get_lr_metric(y_true, y_pred):
    return 1

keras.models.load_model(model_path, custom_objects={'get_lr_metric': get_lr_metric})
shows the same error message.
I use tensorflow 2.3.0 (keras 2.4.0) with Python 3.8 on Windows 10.
How should I load the model?
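One thing worth trying (a sketch, not a confirmed fix): Keras records a function metric under the inner function's name, lr, rather than under the factory name get_lr_metric, so the custom_objects key may need to match that saved name:
# Assumption: the metric was serialized under the inner function's
# name 'lr'; recreate it and register it under that key
optimizer = keras.optimizers.Adam()
model = keras.models.load_model(
    model_path,
    custom_objects={'lr': get_lr_metric(optimizer)},
)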

Related

tf.keras.callbacks.ModelCheckpoint TypeError: Unable to serialize 1.0000000656873453e-05 to JSON

I am creating a tf.keras model with custom layers on top of a pretrained MobileNet. Model training runs fine, but saving the best model raises an error. Below is a snippet of the code I used:
pretrained_model = tf.keras.applications.MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=[*IMAGE_SIZE, IMG_CHANNELS])
pretrained_model.trainable = True  # fine tuning

model = tf.keras.Sequential([
    # Convert image from int [0, 255] to the format expected by this model
    tf.keras.layers.Lambda(
        lambda data: tf.keras.applications.mobilenet.preprocess_input(
            tf.cast(data, tf.float32)), input_shape=[*IMAGE_SIZE, 3]),
    pretrained_model,
    tf.keras.layers.GlobalAveragePooling2D()])
model.add(tf.keras.layers.Dense(64, name='object_dense', kernel_regularizer=tf.keras.regularizers.l2(l2=0.001)))
model.add(tf.keras.layers.BatchNormalization(scale=False, center=False))
model.add(tf.keras.layers.Activation('relu', name='relu_dense_64'))
model.add(tf.keras.layers.Dropout(rate=0.2, name='dropout_dense_64'))
model.add(tf.keras.layers.Dense(32, name='object_dense_2', kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)))
model.add(tf.keras.layers.BatchNormalization(scale=False, center=False))
model.add(tf.keras.layers.Activation('relu', name='relu_dense_32'))
model.add(tf.keras.layers.Dropout(rate=0.2, name='dropout_dense_32'))
model.add(tf.keras.layers.Dense(16, name='object_dense_16', kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)))
model.add(tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax', name='object_prob'))

m1 = tf.keras.metrics.CategoricalAccuracy()
m2 = tf.keras.metrics.Recall()
m3 = tf.keras.metrics.Precision()

optimizers = [
    tfa.optimizers.AdamW(learning_rate=lr * .001, weight_decay=wd),
    tfa.optimizers.AdamW(learning_rate=lr, weight_decay=wd)
]
optimizers_and_layers = [(optimizers[0], model.layers[0]), (optimizers[1], model.layers[1:])]
optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)

model.compile(
    optimizer=optimizer,
    loss='categorical_crossentropy',
    metrics=[m1, m2, m3],
)

checkpoint_path = os.getcwd() + os.sep + 'keras_model'
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(checkpoint_path),
                                                   monitor='categorical_accuracy',
                                                   save_best_only=True,
                                                   save_weights_only=True)
history = model.fit(train_data, validation_data=test_data, epochs=N_EPOCHS, callbacks=[checkpoint_cb])
The tf.keras.callbacks.ModelCheckpoint callback gives me this error:
TypeError: Unable to serialize 1.0000000656873453e-05 to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
Below is the link to the Google Colab notebook in case you want to replicate the issue
https://colab.research.google.com/drive/1wQbUFfhtDaB5Xta574UkAXJtthui7Bt9?usp=sharing
This seems to be a bug in Tensorflow or Keras. The tensor that's being serialized to JSON is from your optimizer definition.
model.optimizer.optimizer_specs[0]["optimizer"].get_config()["weight_decay"]
<tf.Tensor: shape=(), dtype=float32, numpy=1.0000001e-05>
From the implementation of tfa.optimizers.AdamW, the weight_decay is serialized using tf.keras.optimizers.Adam._serialize_hyperparameter. This function assumes that if you pass in a callable for the hyperparameter, it returns a non-tensor value when called, but in your notebook, it was implemented as
wd = lambda: 1e-02 * schedule(step)
where schedule() returns a Tensor. I tried various ways to convert the tensor to a scalar value, but I couldn't get any of them to work. As a workaround, I implemented wd as a LearningRateSchedule so it serializes properly, though the code is clunkier. Replacing the definitions of wd and lr with the following allowed model training to complete for me without any issues.
class MyExponentialDecay(tf.keras.optimizers.schedules.ExponentialDecay):
    def __call__(self, step):
        return 1e-2 * super().__call__(step)

wd = MyExponentialDecay(
    initial_learning_rate,
    decay_steps=14,
    decay_rate=0.8,
    staircase=True)
lr = 1e2 * schedule(step)
After training completes, the model.save() call will fail. I believe this is the same issue which was reported here in the Tensorflow Addons Github. The summary of this issue is that the get_config() function for the optimizers will include a "gv" key in the config which stores Tensor objects, which aren't JSON serializable.
At the time of writing, this issue has not been resolved yet. If you don't need the optimizer state in the final saved model, you can pass the include_optimizer=False argument to model.save(), which worked for me. Otherwise, you may need to patch the library or the specific optimizer class implementation to get rid of the "gv" key in the config, like the OP did in that thread.
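For reference, the workaround is a one-liner (the save path is illustrative):
# Skip serializing the optimizer (and its non-JSON-serializable tensors)
model.save('saved_model_dir', include_optimizer=False)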

keras loss function (from keras input)

I referenced this link: Keras custom loss function: Accessing current input pattern.
But I get the error: "TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model."
This is the source code. What happened?
def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)
In TF 2.0, eager mode is on by default. It's not possible to get this functionality in eager mode as the example above is currently written. I think there are ways to do it in eager mode with some more advanced programming, but otherwise it's a simple matter of turning eager mode off and running in graph mode with:
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
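Alternatively, to stay in eager mode, one commonly used pattern (a sketch, assuming the extra term depends only on the model input and not on y_true/y_pred) is to attach the input-dependent term via model.add_loss and keep a standard loss in compile:
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)

# Attach the input-dependent term as a model-level loss; the compiled
# loss then no longer needs to close over a symbolic input tensor
model.add_loss(K.mean(input_tensor))
model.compile(loss='binary_crossentropy', optimizer='adam')

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)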
When I received a similar error, I added
del model
before the line
model = Model(input_tensor, out)
It resolved my issue; you can give it a shot.
I am eager to know if it solves your problem :).

Compile error on keras sequential model with custom loss function

I am trying to compile a CNN model with ~16K parameters on a GPU in Google Colab for the MNIST dataset.
With the standard loss 'categorical_crossentropy' it works fine, but with custom_loss it gives an error.
lamda = 0.01
m = X_train.shape[0]

def reg_loss(lamda):
    model_layers = custom_model.layers  # list where each element is a Conv2D obj etc.
    reg_wts = 0
    for idx, layer in enumerate(model_layers):
        layer_wts = model_layers[idx].get_weights()  # list
        if len(layer_wts) > 0:  # activation, dropout layers do not have any weights
            layer_wts = model_layers[idx].get_weights()[0]  # ndarray, 3,3,1,16 : layer1 output
            s = np.sum(layer_wts**2)
            reg_wts += s
    print(idx, "reg_wts", reg_wts)
    return (lamda/(2*m)) * reg_wts

reg_loss(lamda)

def custom_loss(y_true, y_pred):
    K.categorical_crossentropy(y_true, y_pred) + reg_loss(lamda)

custom_model.compile(loss=custom_loss, optimizer='adam', metrics=['accuracy'])
reg_loss prints 28 reg_wts 224.11805880069733 and returns 1.8676504900058112e-05.
On compile, it gives the error AttributeError: 'NoneType' object has no attribute 'get_shape'.
The custom_loss function did not have a return statement. A silly mistake, but the error was quite misleading, which is why it took so long to find.
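For completeness, the corrected function just adds the missing return:
def custom_loss(y_true, y_pred):
    # Without `return`, the function yields None, hence the
    # "'NoneType' object has no attribute 'get_shape'" error
    return K.categorical_crossentropy(y_true, y_pred) + reg_loss(lamda)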

Can't save custom subclassed model

Inspired by tf.keras.Model subclassing, I created a custom model.
I can train it and get successful results, but I can't save it.
I use Python 3.6 with TensorFlow v1.10 (or v1.9).
Minimal complete code example here:
import tensorflow as tf
from tensorflow.keras.datasets import mnist


class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__(name="custom_model")
        self.batch_norm1 = tf.layers.BatchNormalization()
        self.conv1 = tf.layers.Conv2D(32, (7, 7))
        self.pool1 = tf.layers.MaxPooling2D((2, 2), (2, 2))
        self.batch_norm2 = tf.layers.BatchNormalization()
        self.conv2 = tf.layers.Conv2D(64, (5, 5))
        self.pool2 = tf.layers.MaxPooling2D((2, 2), (2, 2))

    def call(self, inputs, training=None, mask=None):
        x = self.batch_norm1(inputs)
        x = self.conv1(x)
        x = tf.nn.relu(x)
        x = self.pool1(x)
        x = self.batch_norm2(x)
        x = self.conv2(x)
        x = tf.nn.relu(x)
        x = self.pool2(x)
        return x


if __name__ == '__main__':
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(*x_train.shape, 1)[:1000]
    y_train = y_train.reshape(*y_train.shape, 1)[:1000]
    x_test = x_test.reshape(*x_test.shape, 1)
    y_test = y_test.reshape(*y_test.shape, 1)
    y_train = tf.keras.utils.to_categorical(y_train)
    y_test = tf.keras.utils.to_categorical(y_test)

    model = Classifier()
    inputs = tf.keras.Input((28, 28, 1))
    x = model(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(10, activation="sigmoid")(x)
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, shuffle=True)
    model.save("./my_model")
Error message:
1000/1000 [==============================] - 1s 1ms/step - loss: 4.6037 - acc: 0.7025
Traceback (most recent call last):
File "/home/user/Data/test/python/mnist/mnist_run.py", line 62, in <module>
model.save("./my_model")
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1278, in save
save_model(self, filepath, overwrite, include_optimizer)
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 101, in save_model
'config': model.get_config()
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1049, in get_config
layer_config = layer.get_config()
File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1028, in get_config
raise NotImplementedError
NotImplementedError
Process finished with exit code 1
I looked into the error line and found out that the get_config method checks self._is_graph_network.
Has anybody dealt with this problem?
Thanks!
Update 1:
In Keras 2.2.2 (not tf.keras) I found this comment about model saving in
file: keras/engine/network.py
function: get_config
# Subclassed networks are not serializable
# (unless serialization is implemented by
# the author of the subclassed network).
So, obviously it won't work...
I wonder why they don't point this out in the documentation (like: "Use subclassing, but without the ability to save!").
Update 2:
Found in keras documentation:
In subclassed models, the model's topology is defined as Python code
(rather than as a static graph of layers). That means the model's
topology cannot be inspected or serialized. As a result, the following
methods and attributes are not available for subclassed models:
model.inputs and model.outputs.
model.to_yaml() and model.to_json()
model.get_config() and model.save().
So, there is no way to save a model built by subclassing.
It's only possible to use Model.save_weights().
TensorFlow 2.2
Thanks to #cal for letting me know that the new TensorFlow supports saving custom models!
Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippet describes how.
class ThreeLayerMLP(keras.Model):
    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

# Save the model
model.save('path_to_my_model', save_format='tf')

# Recreate the exact same model purely from the file
new_model = keras.models.load_model('path_to_my_model')
See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models
TensorFlow 2.0
TL;DR:
do not use model.save() for a custom subclassed Keras model;
use save_weights() and load_weights() instead.
With the help of the TensorFlow team, it turns out the best practice for saving a custom subclassed Keras model is to save its weights and load them back when needed.
The reason we cannot simply save a Keras custom subclass model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved/loaded without any problem, as long as we have the same model structure and custom code.
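A minimal sketch of that practice (MyCustomModel and the checkpoint path are illustrative names, not from the original post):
# Save only the weights after training (TF checkpoint format)
model.save_weights('ckpt/my_weights')

# Later: rebuild the same architecture from code, then restore the weights.
# MyCustomModel stands in for your own subclassed tf.keras.Model.
new_model = MyCustomModel()
new_model.load_weights('ckpt/my_weights')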
There is a great tutorial written by Francois Chollet, the author of Keras, on how to save/load Sequential/Functional/Subclassed models in TensorFlow 2.0, in Colab here. The Saving Subclassed Models section says:
Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.
A subclassed model differs in that it's not a datastructure, it's a
piece of code. The architecture of the model is defined via the body
of the call method. This means that the architecture of the model
cannot be safely serialized. To load a model, you'll need to have
access to the code that created it (the code of the model subclass).
Alternatively, you could be serializing this code as bytecode (e.g.
via pickling), but that's unsafe and generally not portable.
This will be fixed in an upcoming release according to the 1.13 pre-release patch notes:
Keras & Python API:
Subclassed Keras models can now be saved through tf.contrib.saved_model.save_keras_model.
EDIT:
It seems this is not quite as finished as the notes suggest. The docs for that function for v1.13 state:
Model limitations: - Sequential and functional models can always be saved. - Subclassed models can only be saved when serving_only=True. This is due to the current implementation copying the model in order to export the training and evaluation graphs. Because the topology of subclassed models cannot be determined, the subclassed models cannot be cloned. Subclassed models will be entirely exportable in the future.
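Based purely on the limitation quoted above (so treat the flag as an assumption rather than a verified signature), the v1.13 export would look something like:
# Sketch for TF 1.13; the export path is illustrative, and serving_only
# is the flag the quoted docs require for subclassed models
tf.contrib.saved_model.save_keras_model(
    model, './saved_models', serving_only=True)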
Tensorflow 2.1 allows saving subclassed models in the SavedModel format.
Since I began using TensorFlow I have been a fan of model subclassing; I feel this way of building models is more pythonic and collaboration-friendly. But saving the model was always a pain point with this approach.
Recently I started to update my knowledge and found the following information, which seems to be true for TensorFlow 2.1:
Subclassed Models
I found this:
The second approach is to use model.save to save the whole model and load_model to restore a previously stored subclassed model.
This saves the model, the weights and other stuff into a SavedModel file.
And finally, the confirmation:
Saving custom objects:
If you are using the SavedModel format, you can
skip this section. The key difference between HDF5 and SavedModel is
that HDF5 uses object configs to save the model architecture, while
SavedModel saves the execution graph. Thus, SavedModels are able to
save custom objects like subclassed models and custom layers without
requiring the original code.
I tested this personally, and effectively, model.save() for subclassed models generates a SavedModel save. There is no more need to use model.save_weights() or related functions; they now serve more specific use cases.
This is supposed to be the end of this painful path for all of us interested in model subclassing.
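In practice that reduces to (a minimal sketch; the path is illustrative):
# TF 2.1+: with no .h5 extension, model.save uses the SavedModel format,
# which can serialize subclassed models
model.save('my_subclassed_model')
restored = tf.keras.models.load_model('my_subclassed_model')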
I found a way to solve it: create a new model and load the weights from the saved .h5 model. This way is not preferred, but it works with Keras 2.2.4 and TensorFlow 1.12.
class MyModel(keras.Model):
    def __init__(self, inputs, *args, **kwargs):
        outputs = func(inputs)
        super(MyModel, self).__init__(inputs=inputs, outputs=outputs, *args, **kwargs)

def get_model():
    return MyModel(inputs, *args, **kwargs)

model = get_model()
model.save('file_path.h5')

model_new = get_model()
model_new.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model_new.load_weights('file_path.h5')
model_new.evaluate(x_test, y_test, **kwargs)
UPDATE: Jul 20
Recently I also tried to create my own subclassed layers and model. Writing your own get_config() function can be difficult, so I used model.save_weights(path_to_model_weights) and model.load_weights(path_to_model_weights) instead. When you want to load the weights, remember to create a model with the same architecture first, then call model.load_weights(). See the TensorFlow guide for more details.
Old Answer (Still correct)
Actually, the TensorFlow documentation says:
In order to save/load a model with custom-defined layers, or a subclassed model, you should overwrite the get_config and optionally from_config methods. Additionally, you should register the custom object so that Keras is aware of it.
For example:
class Linear(keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        config = super(Linear, self).get_config()
        config.update({"units": self.units})
        return config


layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
The output is:
{'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64}
You can play with this simple code. For example, remove the config.update() call in get_config() and see what happens. See this and this for more details; they are the Keras guides on the TensorFlow website.
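The documentation quote above also mentions registering the custom object; one way to do that (a sketch, assuming TF 2.1+ where this decorator exists) is:
# Option 1: register the class so load_model can resolve it by name
@tf.keras.utils.register_keras_serializable(package="my_layers")
class Linear(tf.keras.layers.Layer):
    ...  # same implementation as above

# Option 2: pass it explicitly at load time instead
# model = tf.keras.models.load_model("model_path", custom_objects={"Linear": Linear})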
Use model.predict before tf.saved_model.save (running one prediction builds the subclassed model so it can be saved).
Actually, recreating the model with
keras.models.load_model('path_to_my_model')
didn't work for me.
First we have to save_weights from the built model:
model.save_weights('model_weights', save_format='tf')
Then we have to instantiate a new instance of the subclassed Model, compile it, call train_on_batch with one record, and load_weights from the built model:
loaded_model = ThreeLayerMLP(name='3_layer_mlp')
loaded_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
loaded_model.train_on_batch(x_train[:1], y_train[:1])
loaded_model.load_weights('model_weights')
This works perfectly in TensorFlow 2.2.0.

keras "unknown loss function" error after defining custom loss function

I defined a new loss function in Keras' losses.py file. I closed and relaunched the Anaconda prompt, but I got ValueError: ('Unknown loss function', ':binary_crossentropy_2'). I'm running Keras with Python 2.7 and Anaconda on Windows 10.
I temporarily solved it by defining the loss function in the Python file where I compile my model.
In Keras we have to pass the custom functions to the load_model function:
def my_custom_func():
    # your code
    return

from keras.models import load_model
model = load_model('my_model.h5', custom_objects={'my_custom_func': my_custom_func})
None of these solutions worked for me because I had two or more nested functions for multiple output variables.
My solution was to not compile when loading the model, and to compile it later with the list of loss functions that were used when the model was trained.
from tensorflow.keras.models import load_model

# load model weights, but do not compile
model = load_model("mymodel.h5", compile=False)

# printing the model summary
model.summary()

# custom loss defined for feature 1
def function_loss_o1(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# custom loss defined for feature 2
def function_loss_o2(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# list of loss functions for each output feature; note that the wrappers
# must be called with the same weights used when the model was trained
# (weights_1 and weights_2 are illustrative names)
losses = [function_loss_o1(weights_1), function_loss_o2(weights_2)]

# compile and train the model
model.compile(optimizer='adam', loss=losses, metrics=['accuracy'])

# now you can use the compiled model to predict/evaluate, etc.
eval_dict = {}
eval_dict["test_evaluate"] = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
I didn't have luck with the above solutions, but I was able to do this:
from keras.models import load_model
from keras.utils.generic_utils import get_custom_objects
get_custom_objects().update({'my_custom_func': my_custom_func})
model = load_model('my_model.h5')
I found the solution here: https://github.com/keras-team/keras/issues/5916#issuecomment-294373616
It looks like you're trying to call the function via a string alias, which requires more tampering with Keras' losses.py to map the string to the function (something you should not do, as it gets overridden if you update the package). Instead, just declare the function in your project and pass it to the loss parameter of compile, for example:
from your.project import binary_crossentropy_2
# ...
model.compile(optimizer='adam', loss=binary_crossentropy_2)
As long as your function satisfies the requirements described here, it will work fine.
The solution was to add the function to losses.py in the Keras package within the environment's folder. At first I added it in anaconda2/pkgs/keras.../losses.py, which is why I got the error.
The path for losses.py in the environment is something like:
anaconda2/envs/envname/lib/python2.7/site-packages/keras/losses.py
