Intermediate Layer loss calculation for conditional Computation - python-3.x

I want to create an MLP-based custom CNN model (multi-scaled) that consists of several parallel small networks (capsules). These simple small networks are instantiated as a custom layer (Conv2D -> Flatten -> Dense), one for each convolution scale, i.e. 3x3 and 5x5. The purpose of these capsule networks is to produce intermediate (local) losses that help reduce the overall global loss of the CNN model. I have written a rough sketch, but I'm not able to write the correct code for computing the local loss with these capsules. Here's the code:
from tensorflow.keras import layers
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Layer

class capsule(tf.keras.layers.Layer):
    def __init__(self):
        super(capsule, self).__init__()
        self.loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
        self.Flatten = tf.keras.layers.Flatten()
        self.conv2D = tf.keras.layers.Conv2D(3, 3, (1, 1), padding='same', activation='relu', name="LocalLoss3x3")
        self.classifier = tf.keras.layers.Dense(10, activation='softmax', name='capsule3Output')

    def call(self, inputs):
        x = self.conv2D(inputs)
        x = self.Flatten(x)
        x = self.classifier(x)
        pred = self(x_train)
        loss = self.loss_fn(pred, y_train)
        #self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return loss, x

(x_train, y_train), (x_test, y_test) = mnist.load_data()

from tensorflow.keras import layers

class SparseMLP(tf.keras.models.Model):
    def __init__(self, output_dim):
        super(SparseMLP, self).__init__()
        self.dense_1 = layers.Dense(1, activation=tf.nn.relu)
        self.capsule = capsule()
        self.dense_2 = layers.Dense(output_dim)

    def call(self, inputs):
        x = self.dense_1(inputs)
        loss, x = self.capsule(inputs)
        return self.dense_2(x)

mlp = SparseMLP(10)
#x_train=x_train.reshape(-1,28,28,1)
y = mlp(x_train)

To include a loss within a layer, you can use the add_loss function of the tf.keras.layers.Layer class. This function takes a loss value and adds it to the global loss defined in compile().
You can call self.add_loss(loss_value) from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to compile()).
So, to make your model consider the losses from the intermediate layer, you should uncomment the add_loss call and then train the model the usual way.
Keep in mind that it is totally fine not to declare a "main" loss in compile(): when you pass losses via add_loss(), it becomes possible to call compile() without a loss function, since the model already has a loss to minimize.
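For instance, a minimal sketch of the capsule layer with that add_loss line re-enabled. This assumes an activity-regularization style local loss with a rate attribute defined in __init__ (the commented-out line in your code references self.rate, which does not exist yet); the label-dependent cross-entropy is dropped here, since y_train is not available inside call():

import tensorflow as tf

class capsule(tf.keras.layers.Layer):
    def __init__(self, rate=1e-3):
        super(capsule, self).__init__()
        self.rate = rate  # assumed weighting factor for the local loss
        self.conv2D = tf.keras.layers.Conv2D(3, 3, (1, 1), padding='same', activation='relu', name="LocalLoss3x3")
        self.Flatten = tf.keras.layers.Flatten()
        self.classifier = tf.keras.layers.Dense(10, activation='softmax', name='capsule3Output')

    def call(self, inputs):
        x = self.conv2D(inputs)
        x = self.Flatten(x)
        x = self.classifier(x)
        # local loss: tracked by Keras and added to the main loss automatically
        self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
        return x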
Please note that the call function of the SparseMLP model should look like this:
def call(self, inputs):
    x = self.dense_1(inputs)
    # I don't know whether you really want to pass `inputs` to the capsule
    # instead of `x`. Currently the output of dense_1 is not used at all,
    # so make sure you are passing the proper inputs to each layer.
    # Also, you do not have to return the loss here, as it will be tracked
    # internally by Keras.
    x = self.capsule(inputs)
    return self.dense_2(x)
So running your model like below should do the trick:
model.compile(loss="your main loss, if there is one", metrics=["your metrics"])
model.fit(x=train_inst, y=train_targets)

Related

Sigmoid vs Binary Cross Entropy Loss

In my torch model, the last layer is a torch.nn.Sigmoid() and the loss is the torch.nn.BCELoss.
In the training step, the following error has occurred:
RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast.
Many models use a sigmoid layer right before the binary cross entropy layer.
In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits
or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogits are
safe to autocast.
However, when I try to reproduce this error while computing the loss and backpropagating, everything runs correctly:
import torch
from torch import nn

# last layer
sigmoid = nn.Sigmoid()
# loss
bce_loss = nn.BCELoss()
# the true classes
true_cls = torch.tensor([
    [0.],
    [1.]])
# model prediction classes
pred_cls = sigmoid(
    torch.tensor([
        [0.4949],
        [0.4824]], requires_grad=True)
)
pred_cls
# tensor([[0.6213],
#         [0.6183]], grad_fn=<SigmoidBackward>)
out = bce_loss(pred_cls, true_cls)
out
# tensor(0.7258, grad_fn=<BinaryCrossEntropyBackward>)
out.backward()
What am I missing?
I appreciate any help you can provide.
You have to move it to CUDA first and enable autocast, like this:
import torch
from torch import nn
from torch.cuda.amp import autocast

# last layer
sigmoid = nn.Sigmoid().cuda()
# loss
bce_loss = nn.BCELoss().cuda()
# the true classes
true_cls = torch.tensor([
    [0.],
    [1.]]).cuda()

with autocast():
    # model prediction classes
    pred_cls = sigmoid(
        torch.tensor([
            [0.4949],
            [0.4824]], requires_grad=True
        ).cuda()
    )
    pred_cls
    # tensor([[0.6213],
    #         [0.6183]], grad_fn=<SigmoidBackward>)
    out = bce_loss(pred_cls, true_cls)
    out
    # tensor(0.7258, grad_fn=<BinaryCrossEntropyBackward>)
    out.backward()

RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast.
Many models use a sigmoid layer right before the binary cross entropy layer.
In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits
or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogits are
safe to autocast.
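As the error message suggests, the autocast-safe fix is to drop the explicit sigmoid and feed the raw logits to BCEWithLogitsLoss. A minimal sketch, assuming the same example values and that a CUDA device is available:

import torch
from torch import nn
from torch.cuda.amp import autocast

# combines sigmoid + BCE in one numerically stable, autocast-safe op
bce_logits_loss = nn.BCEWithLogitsLoss().cuda()

true_cls = torch.tensor([
    [0.],
    [1.]]).cuda()
logits = torch.tensor([
    [0.4949],
    [0.4824]], requires_grad=True, device='cuda')

with autocast():
    out = bce_logits_loss(logits, true_cls)  # pass logits, not probabilities

out.backward()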

mse loss function not compatible with regularization loss (add_loss) on hidden layer output

I would like to code in tf.Keras a neural network with a couple of loss functions. One is a standard mse (mean squared error) multiplied by a factor, while the other is basically a regularization term on the output of a hidden layer. This second loss is added through self.add_loss() in a user-defined class inheriting from tf.keras.layers.Layer. I have a couple of questions (the first is more important though).
1) The error I get when trying to combine the two losses together is the following:
ValueError: Shapes must be equal rank, but are 0 and 1
From merging shape 0 with other shapes. for '{{node AddN}} = AddN[N=2, T=DT_FLOAT](loss/weighted_loss/value, model/new_layer/mul_1)' with input shapes: [], [100].
So it comes from the fact that the tensors which should add up to make one unique loss value have different shapes (and ranks). Still, when I try to print the losses during the training, I clearly see that the vectors returned as losses have shape batch_size and rank 1. Could it be that when the 2 losses are summed I have to provide them (or at least the loss of add_loss) as scalar? I know the mse is usually returned as a vector where each entry is the mse from one sample in the batch, hence having batch_size as shape. I think I tried to do the same with the "regularization" loss. Do you have an explanation for this behavio(u)r?
The sample code which gives me error is the following:
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input

def rate_mse(rate=1e5):
    # @tf.function # also needed for printing
    def loss(y_true, y_pred):
        tmp = rate*K.mean(K.square(y_pred - y_true), axis=-1)
        # tf.print('shape %s and rank %s output in mse'%(K.shape(tmp), tf.rank(tmp)))
        tf.print('shape and rank output in mse', [K.shape(tmp), tf.rank(tmp)])
        tf.print('mse loss:', tmp) # print when I put tf.function
        return tmp
    return loss

class newLayer(tf.keras.layers.Layer):
    def __init__(self, rate=5e-2, **kwargs):
        super(newLayer, self).__init__(**kwargs)
        self.rate = rate

    # @tf.function # to be commented out for NN training
    def call(self, inputs):
        tmp = self.rate*K.mean(inputs*inputs, axis=-1)
        tf.print('shape and rank output in regularizer', [K.shape(tmp), tf.rank(tmp)])
        tf.print('regularizer loss:', tmp)
        self.add_loss(tmp, inputs=True)
        return inputs

tot_n = 10000
xx = np.random.rand(tot_n, 1)
yy = np.pi*xx
train_size = int(0.9*tot_n)
xx_train = xx[:train_size]; xx_val = xx[train_size:]
yy_train = yy[:train_size]; yy_val = yy[train_size:]

reg_layer = newLayer()
input_layer = Input(shape=(1,)) # input
hidden = Dense(20, activation='relu', input_shape=(2,))(input_layer) # hidden layer
hidden = reg_layer(hidden)
output_layer = Dense(1, activation='linear')(hidden)

model = Model(inputs=[input_layer], outputs=[output_layer])
model.compile(optimizer='Adam', loss=rate_mse(), experimental_run_tf_function=False)
#model.compile(optimizer='Adam', loss=None, experimental_run_tf_function=False)

model.fit(xx_train, yy_train, epochs=100, batch_size=100,
          validation_data=(xx_val, yy_val), verbose=1)

#new_xx = np.random.rand(10,1); new_yy = np.pi*new_xx
#model.evaluate(new_xx,new_yy)
print(model.predict(np.array([[1]])))
2) I would also have a secondary question related to this code. I noticed that printing with tf.print inside the function rate_mse only works with the tf.function decorator. Similarly, the call method of newLayer is only taken into consideration if the same decorator is commented out during training. Can someone explain why this is the case or point me to a possible solution?
Thanks in advance to whoever can provide me help. I am currently using Tensorflow 2.2.0 and keras version is 2.3.0-tf.
I was stuck with the same problem for a few days. The "standard" loss is already a scalar by the time it is added to the loss coming from add_loss. The only way I got it working was to add one more axis when computing the mean, so that the added loss becomes a scalar too, and then it works:
tmp = self.rate*K.mean(inputs*inputs, axis=[0, -1])
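Applied to the newLayer class from the question, the call method would then look like this (a sketch; only the reduction axes change, the rest is as in the original code):

import tensorflow as tf
from tensorflow.keras import backend as K

class newLayer(tf.keras.layers.Layer):
    def __init__(self, rate=5e-2, **kwargs):
        super(newLayer, self).__init__(**kwargs)
        self.rate = rate

    def call(self, inputs):
        # reducing over both the batch axis and the feature axis yields a scalar,
        # which can be summed with the scalar mse loss without a rank mismatch
        tmp = self.rate * K.mean(inputs * inputs, axis=[0, -1])
        self.add_loss(tmp, inputs=True)
        return inputs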

Is there a way to save a Keras model build in tensorflow 2.0 from Model Sub classing API?

Is there a way to save the entire model built using the tf.keras Model subclassing API after the training is done? I know we can use save_weights to save the weights only, but is there a way to save the whole model so that I may use it later for prediction when I do not have the code available?
class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.num_classes = num_classes
        # Define your layers here.
        self.dense_1 = layers.Dense(32, activation='relu')
        self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

    def call(self, inputs):
        # Define your forward pass here,
        # using layers you previously defined (in `__init__`).
        x = self.dense_1(inputs)
        return self.dense_2(x)

model = MyModel(num_classes=10)

# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
You can use the following steps for saving the model after training, then loading it and running inference:
Save Model after training
model.save(filepath="model")
# OR
tf.keras.models.save_model(model, filepath="model_")
Load Saved Model
loaded_model = tf.keras.models.load_model(filepath="model_")
Prediction using Loaded model
result = loaded_model.predict(test_db)
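One thing to keep in mind with subclassed models: the model has to be built (by calling it on data, or by running fit) before save() works, and it is stored in the TensorFlow SavedModel format rather than a single HDF5 file. A minimal sketch, reusing the MyModel class and the imports from the question above (the dummy data shape here is just an assumption):

import numpy as np
import tensorflow as tf

model = MyModel(num_classes=10)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# build the model by calling it on some data first (or by running fit)
dummy_x = np.random.rand(4, 8).astype('float32')
_ = model(dummy_x)

model.save("model_")                                  # SavedModel directory
loaded_model = tf.keras.models.load_model("model_")   # restore for later prediction
print(loaded_model.predict(dummy_x))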

How to add BatchNormalization loss to gradient calculation in tensorflow 2.0 using keras subclass API

Using the Keras subclass API it is easy enough to add a batch normalization layer; however, the layer.losses list always appears empty. What is the correct method of including it in the training loss when doing tape.gradient(loss, lossmodel.trainable_variables), where lossmodel is some separate Keras subclass model defining a more complicated loss function that must include the gradient losses?
For example, this is a minimal model with ONLY the batch norm layer. It has no loss AFAIK:
class M(tf.keras.Model):
    def __init__(self, axis):
        super().__init__()
        self.layer = tf.keras.layers.BatchNormalization(axis=axis, scale=False, center=True,
                                                        virtual_batch_size=1, input_shape=(6,))

    def call(self, x):
        out = self.layer(x)
        return out

m = M(1)

In [77]: m.layer.losses
Out[77]: []
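For what it's worth, layer.losses only collects add_loss and regularizer terms, which is why the list above is empty; BatchNormalization's moving-statistics updates are handled automatically in TF 2.x when the layer is called with training=True. A rough sketch of the usual pattern for folding any tracked model losses into tape.gradient (the optimizer, loss function, and dummy data below are placeholders, not part of the question):

import tensorflow as tf

m = M(1)
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((4, 6))   # placeholder data matching input_shape=(6,)
y = tf.random.normal((4, 6))

with tf.GradientTape() as tape:
    pred = m(x, training=True)          # training=True also updates BN statistics
    loss = loss_fn(y, pred)
    if m.losses:                        # add_loss / regularizer terms, if any
        loss += tf.add_n(m.losses)
grads = tape.gradient(loss, m.trainable_variables)
optimizer.apply_gradients(zip(grads, m.trainable_variables))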

Generative Adversarial Networks (GANs) in Keras - creating the combined model

I'm trying to create a pretty simple GAN model, and I'm not sure how to combine the generator and the discriminator for training the generator:
from keras import optimizers
from keras.layers import Input, Dense
from keras.models import Sequential, Model
import numpy as np

def build_generator(input_dim=10, output_dim=40, hidden_dim=28):
    model = Sequential()
    model.add(Dense(hidden_dim, input_dim=input_dim, activation='sigmoid', kernel_initializer="random_uniform"))
    model.add(Dense(output_dim, activation='sigmoid', kernel_initializer="random_uniform"))
    return model

def build_discriminator(input_dim=40, hidden_dim=28, output_dim=50):
    input_d = Input(shape=(input_dim,))
    encoded = Dense(hidden_dim, activation='sigmoid', kernel_initializer="random_uniform")(input_d)
    decoded = Dense(output_dim, activation='sigmoid', kernel_initializer="random_uniform")(encoded)
    x = Dense(1, activation='relu')(encoded)
    y = Dense(1, activation='sigmoid')(encoded)
    model = Model(inputs=input_d, outputs=[decoded, x, y])
    return model

sgd = optimizers.SGD(lr=0.1)
generator = build_generator(10, 100, 70)
discriminator = build_discriminator(100, 60, 80)
generator.compile(loss='mean_squared_error', optimizer=sgd)
discriminator.trainable = True
discriminator.compile(loss='mean_squared_error', optimizer=sgd)
discriminator.trainable = False
Now I'm not sure how to combine them both, so that the discriminator receives the generator's output and passes the backpropagation signal back to the generator.
For this, the best thing to do is to use the functional Model API. This is suited for more complex models, accepting branches, concatenations, etc.
(It's still possible, in this specific case, to use sequential models, but the functional API always sounded better to me, for the freedom it gives for further experiments on the models.)
So, you may preserve your two sequential models. All you have to do is to build a third model that contains these two.
generator = build_generator(....) #don't create a new generator, use the one you have.
discriminator = build_discriminator(....)
Now, a functional API model has its input shape defined like this:
inputTensor = Input(inputShape) #inputShape must be the same as in generator
And we work by passing inputs to layers and getting outputs:
#Getting the output of the generator given our input tensor:
genOut = generator(inputTensor) #you call a model just like you call a layer
#and we pass the generator's output to the discriminator, getting its output:
discOut = discriminator(genOut)
Finally, we create the actual model by defining its start and end points:
GAN = Model(inputTensor, discOut)
Use the model.layers[i].trainable parameter before compile to define which layers will be trainable or not in each of the models.
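Putting those pieces together with the models from the question (the generator was built with input_dim=10, so inputShape=(10,)), a minimal sketch of the combined model could look like this:

from keras.layers import Input
from keras.models import Model

inputShape = (10,)                  # must match the generator's input_dim above
inputTensor = Input(inputShape)

genOut = generator(inputTensor)     # call the existing generator model
discOut = discriminator(genOut)     # feed its output into the discriminator

# discriminator.trainable was already set to False in the question's code, so
# compiling GAN now freezes the discriminator's weights inside the combined model,
# while the standalone discriminator (compiled while trainable) stays trainable.
GAN = Model(inputTensor, discOut)
GAN.compile(loss='mean_squared_error', optimizer=sgd)

Training then alternates between updating the standalone discriminator on real and generated samples, and training GAN on noise inputs, which updates only the generator's weights.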
Combining the generator and discriminator models can, indeed, sometimes be quite confusing. I found the repository linked below, which demonstrates quite well, with detailed code, how to construct multiple GAN architectures in Keras:
https://github.com/kochlisGit/Keras-GAN
