keras "unknown loss function" error after defining custom loss function - keras

I defined a new loss function in Keras' losses.py file. I closed and relaunched the Anaconda prompt, but I still get ValueError: ('Unknown loss function', ':binary_crossentropy_2'). I'm running Keras with Python 2.7 and Anaconda on Windows 10.
I temporarily solved it by defining the loss function directly in the Python file where I compile my model.

In Keras you have to pass custom functions to load_model via the custom_objects argument:
from keras.models import load_model

def my_custom_func(y_true, y_pred):
    # your code
    return ...

model = load_model('my_model.h5', custom_objects={'my_custom_func': my_custom_func})
Note that the key in custom_objects must match the name the loss was saved under.

None of these solutions worked for me, because I had two or more nested functions for multiple output variables.
My solution was to not compile when loading the model, and to compile it later with the list of loss functions that were used when the model was trained.
from tensorflow.keras.models import load_model

# load the model weights, but do not compile
model = load_model("mymodel.h5", compile=False)

# print the model summary
model.summary()

# custom loss defined for feature 1
def function_loss_o1(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# custom loss defined for feature 2
def function_loss_o2(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# list of loss functions, one per output feature; the outer wrappers must
# be called with the weight lists used at training time so that compile()
# receives the inner (y_true, y_pred) functions
losses = [function_loss_o1(weights_1), function_loss_o2(weights_2)]

# compile and train the model
model.compile(optimizer='adam', loss=losses, metrics=['accuracy'])

# now you can use the compiled model to predict/evaluate, etc.
eval_dict = {}
eval_dict["test_evaluate"] = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)

I didn't have luck with the above solutions, but I was able to do this:
from keras.models import load_model
from keras.utils.generic_utils import get_custom_objects
get_custom_objects().update({'my_custom_func': my_custom_func})
model = load_model('my_model.h5')
I found the solution here: https://github.com/keras-team/keras/issues/5916#issuecomment-294373616

It looks like you're trying to refer to the function by a string alias, which would require tampering with Keras' losses.py to map the string to the function (something you should not do, as it gets overridden when you update the package). Instead, just declare the function in your project and pass it to the loss parameter, for example:
from your.project import binary_crossentropy_2
# ...
model.compile(optimizer='adam', loss=binary_crossentropy_2)
As long as your function satisfies the requirements here, it will work fine.
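For completeness, a minimal sketch of what such a function might look like; the body below is an assumption (a thin wrapper around the built-in backend loss), standing in for whatever binary_crossentropy_2 actually computes:
from keras import backend as K

def binary_crossentropy_2(y_true, y_pred):
    # assumed body: any computation with the (y_true, y_pred)
    # signature that returns a per-sample loss tensor works here
    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)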

The solution was to add the function to losses.py in the Keras package inside the environment's folder. At first I had added it to anaconda2/pkgs/keras.../losses.py, which is why I got the error.
The path to losses.py in the environment is something like:
anaconda2/envs/envname/lib/python2.7/site-packages/keras/losses.py

Related

load_model with a metric

I saved a model with a metric I defined, as it is done here.
Where they do the following:
def get_lr_metric(optimizer):
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

optimizer = keras.optimizers.Adam()
lr_metric = get_lr_metric(optimizer)
model.compile(
    optimizer=optimizer,
    metrics=['accuracy', lr_metric],
    loss='mean_absolute_error',
)
It works great.
However, when I try to load this model:
keras.models.load_model(model_path, custom_objects={'get_lr_metric': get_lr_metric})
I get:
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().
Trying the solution here:
def get_lr_metric(y_true, y_pred):
    return 1

keras.models.load_model(model_path, custom_objects={'get_lr_metric': get_lr_metric})
shows the same error message.
I use tensorflow 2.3.0 (keras 2.4.0) with Python 3.8 on Windows 10.
How should I load the model?
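One workaround, consistent with the compile=False approach shown earlier (a sketch, not from the original thread): load without restoring the compiled state, then rebuild the metric exactly as at training time.
from tensorflow import keras

# load architecture and weights only; skip the saved compile config
model = keras.models.load_model(model_path, compile=False)

# rebuild the metric with the same factory used during training
optimizer = keras.optimizers.Adam()
lr_metric = get_lr_metric(optimizer)
model.compile(
    optimizer=optimizer,
    metrics=['accuracy', lr_metric],
    loss='mean_absolute_error',
)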

Intermediate Layer loss calculation for conditional Computation

I want to create an MLP-based custom CNN model (multi-scale) consisting of several parallel small networks (capsules). These simple small networks are instantiated as a custom layer (conv2d -> Flatten -> Dense) for each convolution scale, i.e. 3x3, 5x5. The purpose of these capsule networks is to generate an intermediate loss signal that reduces the overall global loss of the CNN model. I have written some sketchy code, but I'm not able to write the correct code for computing the local loss using these capsules. Here's the code:
from tensorflow.keras import layers
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Layer

class capsule(tf.keras.layers.Layer):
    def __init__(self):
        super(capsule, self).__init__()
        self.loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
        self.Flatten = tf.keras.layers.Flatten()
        self.conv2D = tf.keras.layers.Conv2D(3, 3, (1, 1), padding='same',
                                             activation='relu', name="LocalLoss3x3")
        self.classifier = tf.keras.layers.Dense(10, activation='softmax',
                                                name='capsule3Output')

    def call(self, inputs):
        x = self.conv2D(inputs)
        x = self.Flatten(x)
        x = self.classifier(x)
        pred = self(x_train)
        loss = self.loss_fn(pred, y_train)
        #self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return loss, x

(x_train, y_train), (x_test, y_test) = mnist.load_data()

class SparseMLP(tf.keras.models.Model):
    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = layers.Dense(1, activation=tf.nn.relu)
        self.capsule = capsule()
        self.dense_2 = layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        loss, x = self.capsule(inputs)
        return self.dense_2(x)

mlp = SparseMLP(10)
#x_train = x_train.reshape(-1, 28, 28, 1)
y = mlp(x_train)
To include a loss within a layer, you can use the add_loss function of the tf.keras.layers.Layer class. This function takes a loss value and adds it to the global loss defined in the compile function.
You can call self.add_loss(loss_value) from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to compile()).
So, to make your model consider the losses from the intermediate layer, you should uncomment the add_loss call and then train the model the usual way.
Note that it is totally fine to not declare a "main" loss in the compile function, since there already is a loss defined in your layer class: when you pass losses via add_loss(), it becomes possible to call compile() without a loss function, because the model already has a loss to minimize.
Also note that the call function of the SparseMLP model should look like this:
x = self.dense_1(inputs)
# Check whether you really mean to pass inputs (rather than x) to the
# capsule; as written, the output of dense_1 is not used at all, so make
# sure you are feeding the proper inputs to each layer. Also, you do not
# have to compute the loss here: it is tracked internally by Keras.
x = self.capsule(inputs)
return self.dense_2(x)
So running your model like below should do the trick:
model.compile(loss=...,  # your main loss, if there is one
              metrics=[...])  # your metrics
model.fit(x=train_inst, y=train_targets)
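Putting the advice together, a minimal sketch of the capsule layer with the add_loss line uncommented (the rate hyperparameter and the activity-style penalty mirror the commented-out line in the question; a label-dependent local loss would instead need the targets passed into call):
import tensorflow as tf

class Capsule(tf.keras.layers.Layer):
    def __init__(self, rate=1e-3):
        super().__init__()
        self.rate = rate
        self.conv2d = tf.keras.layers.Conv2D(3, 3, (1, 1), padding='same',
                                             activation='relu')
        self.flatten = tf.keras.layers.Flatten()
        self.classifier = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        x = self.conv2d(inputs)
        x = self.flatten(x)
        x = self.classifier(x)
        # register the intermediate loss; Keras adds it to the "main"
        # loss (if any) automatically during training
        self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return x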

keras loss function (from keras input)

I am following this link: Keras custom loss function: Accessing current input pattern.
But I get this error: "TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model."
This is the source code. What happened?
def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)
In TF 2.0, eager mode is on by default. It's not possible to get this functionality in eager mode as the example above is currently written. I think there are ways to do it in eager mode with some more advanced programming, but otherwise it's a simple matter of turning eager mode off and running in graph mode with:
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
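As a sketch of the "more advanced" eager-friendly route (an assumption, not from the original answer): model.add_loss accepts symbolic tensors in a functional model, so the loss term can reference the input tensor directly, and the targets can be fed as an extra input.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(10,))
target = Input(shape=(1,))  # targets enter the graph as an input
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)

model = Model([input_tensor, target], out)
# the loss can see input_tensor because both live in the same graph
model.add_loss(tf.reduce_mean(tf.keras.losses.binary_crossentropy(target, out))
               + tf.reduce_mean(input_tensor))
model.compile(optimizer='adam')  # no loss argument needed

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch([X, y])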
When I received a similar error, I performed the following:
del model
before re-creating the model with:
model = Model(input_tensor, out)
It resolved my issue; you can give it a shot.
I am eager to know if it solves your problem :)

Can't save custom subclassed model

Inspired by tf.keras.Model subclassing, I created a custom model.
I can train it and get successful results, but I can't save it.
I use Python 3.6 with TensorFlow v1.10 (or v1.9).
Minimal complete code example here:
import tensorflow as tf
from tensorflow.keras.datasets import mnist

class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__(name="custom_model")
        self.batch_norm1 = tf.layers.BatchNormalization()
        self.conv1 = tf.layers.Conv2D(32, (7, 7))
        self.pool1 = tf.layers.MaxPooling2D((2, 2), (2, 2))
        self.batch_norm2 = tf.layers.BatchNormalization()
        self.conv2 = tf.layers.Conv2D(64, (5, 5))
        self.pool2 = tf.layers.MaxPooling2D((2, 2), (2, 2))

    def call(self, inputs, training=None, mask=None):
        x = self.batch_norm1(inputs)
        x = self.conv1(x)
        x = tf.nn.relu(x)
        x = self.pool1(x)
        x = self.batch_norm2(x)
        x = self.conv2(x)
        x = tf.nn.relu(x)
        x = self.pool2(x)
        return x

if __name__ == '__main__':
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(*x_train.shape, 1)[:1000]
    y_train = y_train.reshape(*y_train.shape, 1)[:1000]
    x_test = x_test.reshape(*x_test.shape, 1)
    y_test = y_test.reshape(*y_test.shape, 1)
    y_train = tf.keras.utils.to_categorical(y_train)
    y_test = tf.keras.utils.to_categorical(y_test)

    model = Classifier()
    inputs = tf.keras.Input((28, 28, 1))
    x = model(inputs)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(10, activation="sigmoid")(x)
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, shuffle=True)
    model.save("./my_model")
Error message:
1000/1000 [==============================] - 1s 1ms/step - loss: 4.6037 - acc: 0.7025
Traceback (most recent call last):
  File "/home/user/Data/test/python/mnist/mnist_run.py", line 62, in <module>
    model.save("./my_model")
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1278, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 101, in save_model
    'config': model.get_config()
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1049, in get_config
    layer_config = layer.get_config()
  File "/home/user/miniconda3/envs/ml3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1028, in get_config
    raise NotImplementedError
NotImplementedError

Process finished with exit code 1
I looked into the error line and found out that the get_config method checks self._is_graph_network.
Has anybody dealt with this problem?
Thanks!
Update 1:
On Keras 2.2.2 (not tf.keras), I found this comment (about model saving) in keras/engine/network.py, in the get_config function:
# Subclassed networks are not serializable
# (unless serialization is implemented by
# the author of the subclassed network).
So, obviously it won't work...
I wonder why they don't point this out in the documentation (something like: "Use subclassing, but without the ability to save!").
Update 2:
Found in the Keras documentation:
In subclassed models, the model's topology is defined as Python code (rather than as a static graph of layers). That means the model's topology cannot be inspected or serialized. As a result, the following methods and attributes are not available for subclassed models:
model.inputs and model.outputs,
model.to_yaml() and model.to_json(),
model.get_config() and model.save().
So, there is no way to save a model that uses subclassing.
It's only possible to use Model.save_weights().
TensorFlow 2.2
Thanks to @cal for letting me know that the new TensorFlow supports saving custom models!
Use model.save to save the whole model and load_model to restore a previously stored subclassed model. The following code snippet shows how:
from tensorflow import keras
from tensorflow.keras import layers

class ThreeLayerMLP(keras.Model):
    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

# Save the model
model.save('path_to_my_model', save_format='tf')

# Recreate the exact same model purely from the file
new_model = keras.models.load_model('path_to_my_model')
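Note that a subclassed model must have been called on data at least once before model.save, so that its weights and input shape exist. A minimal sketch, where the dummy data shape is an assumption:
import numpy as np

# build the model's variables by running one forward pass
x_dummy = np.random.random((4, 32)).astype('float32')
_ = model(x_dummy)
model.save('path_to_my_model', save_format='tf')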
See: Save and serialize models with Keras - Part II: Saving and Loading of Subclassed Models
TensorFlow 2.0
TL;DR:
do not use model.save() for a custom subclassed Keras model;
use save_weights() and load_weights() instead.
With the help of the TensorFlow team, it turns out the best practice for saving a custom subclassed Keras model is to save its weights and load them back when needed.
The reason we cannot simply save a Keras custom subclassed model is that it contains custom code, which cannot be serialized safely. However, the weights can be saved/loaded without any problem, as long as we have the same model structure and custom code.
There is a great tutorial written by François Chollet, the author of Keras, on how to save/load Sequential/Functional/Keras/custom subclassed models in TensorFlow 2.0, in Colab here. In the "Saving Subclassed Models" section, it says:
Sequential models and Functional models are datastructures that represent a DAG of layers. As such, they can be safely serialized and deserialized.
A subclassed model differs in that it's not a datastructure, it's a
piece of code. The architecture of the model is defined via the body
of the call method. This means that the architecture of the model
cannot be safely serialized. To load a model, you'll need to have
access to the code that created it (the code of the model subclass).
Alternatively, you could be serializing this code as bytecode (e.g.
via pickling), but that's unsafe and generally not portable.
This will be fixed in an upcoming release according to the 1.13 pre-release patch notes:
Keras & Python API:
Subclassed Keras models can now be saved through tf.contrib.saved_model.save_keras_model.
EDIT:
It seems this is not quite as finished as the notes suggest. The docs for that function in v1.13 state:
Model limitations: Sequential and functional models can always be saved. Subclassed models can only be saved when serving_only=True. This is due to the current implementation copying the model in order to export the training and evaluation graphs. Because the topology of subclassed models cannot be determined, subclassed models cannot be cloned. Subclassed models will be entirely exportable in the future.
Tensorflow 2.1 allows saving subclassed models in the SavedModel format
From my beginnings with TensorFlow, I was always a fan of model subclassing; I feel this way of building models is more Pythonic and collaboration-friendly. But saving the model was always a pain point with this approach.
Recently I started to update my knowledge and came across the following information, which seems to hold for TensorFlow 2.1:
Subclassed Models
I found this:
The second approach is to use model.save to save the whole model and load_model to restore a previously stored subclassed model.
This saves the model, the weights, and other state into a SavedModel file.
And finally, the confirmation:
Saving custom objects:
If you are using the SavedModel format, you can skip this section. The key difference between HDF5 and SavedModel is that HDF5 uses object configs to save the model architecture, while SavedModel saves the execution graph. Thus, SavedModels are able to save custom objects like subclassed models and custom layers without requiring the original code.
I tested this personally, and indeed model.save() for subclassed models generates a SavedModel save. There is no more need for model.save_weights() or related functions; they now exist for more specific use cases.
This is supposed to be the end of this painful path for all of us interested in model subclassing.
I found a way to solve it: create a new model and load the weights from the saved .h5 model. This way is not preferred, but it works with Keras 2.2.4 and TensorFlow 1.12.
class MyModel(keras.Model):
    def __init__(self, inputs, *args, **kwargs):
        outputs = func(inputs)
        super(MyModel, self).__init__(inputs=inputs, outputs=outputs, *args, **kwargs)

def get_model():
    return MyModel(inputs, *args, **kwargs)

model = get_model()
model.save('file_path.h5')

model_new = get_model()
model_new.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model_new.load_weights('file_path.h5')
model_new.evaluate(x_test, y_test, **kwargs)
UPDATE: Jul 20
Recently I also tried to create my own subclassed layers and model. Writing your own get_config() function can be difficult, so I used model.save_weights(path_to_model_weights) and model.load_weights(path_to_model_weights). When you want to load the weights, remember to create the model with the same architecture first, then call model.load_weights(). See the TensorFlow guide for more details.
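In code, the round trip looks roughly like this (a sketch; the ThreeLayerMLP class and the 32-feature input shape are assumptions reused from the earlier example):
# save only the weights of the trained model
model.save_weights(path_to_model_weights)

# recreate the same architecture, build it, then restore the weights
new_model = ThreeLayerMLP(name='3_layer_mlp')
new_model.build(input_shape=(None, 32))  # assumed feature count
new_model.load_weights(path_to_model_weights)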
Old Answer (still correct)
Actually, the TensorFlow documentation says:
In order to save/load a model with custom-defined layers, or a subclassed model, you should overwrite the get_config and optionally from_config methods. Additionally, you should register the custom object so that Keras is aware of it.
For example:
class Linear(keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        config = super(Linear, self).get_config()
        config.update({"units": self.units})
        return config

layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
The output is:
{'name': 'linear_8', 'trainable': True, 'dtype': 'float32', 'units': 64}
You can play with this simple code; for example, remove the config.update() call in get_config() and see what happens. See this and this for more details; they are the Keras guides on the TensorFlow website.
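As for the "register the custom object" part, a minimal sketch (the file name model_with_linear.h5 is hypothetical; passing custom_objects={'Linear': Linear} to load_model works equally well):
import tensorflow as tf

# register the custom layer globally so load_model can resolve it by name
tf.keras.utils.get_custom_objects().update({'Linear': Linear})
# 'model_with_linear.h5' is a hypothetical saved model containing a Linear layer
model = tf.keras.models.load_model('model_with_linear.h5')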
use model.predict before tf.saved_model.save
Actually, recreating the model with
keras.models.load_model('path_to_my_model')
didn't work for me.
First, we have to save_weights from the built model:
model.save_weights('model_weights', save_format='tf')
Then we have to instantiate a new instance of the subclassed model, compile it, call train_on_batch with one record, and load_weights from the built model:
loaded_model = ThreeLayerMLP(name='3_layer_mlp')
loaded_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
loaded_model.train_on_batch(x_train[:1], y_train[:1])
loaded_model.load_weights('model_weights')
This works perfectly in TensorFlow==2.2.0.

How can I load the weights only for some layers?

I want to take the weights of some layers (not all, as the architectures differ) from model_trained and initialize model_untrained with them. How can I do this with Keras?
If you have a function create_model() which returns a Keras model (example), you can initialize its weights like this:
from keras.models import load_model
model_untrained = create_model()
model_trained = load_model('trained_model.h5')
extracted_weights = model_trained.layers[0].get_weights()
model_untrained.layers[0].set_weights(extracted_weights)
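To extend this to several layers, one option (a sketch, assuming the shared layers carry the same names and shapes in both architectures) is to match layers by name and skip the ones that differ:
# copy weights for every layer name that exists in both models
for layer in model_untrained.layers:
    try:
        trained_layer = model_trained.get_layer(layer.name)
        layer.set_weights(trained_layer.get_weights())
    except ValueError:
        pass  # layer not in the trained model; keep its fresh initialization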
