Element-wise multiplication with Keras

I have an RGB image of shape (256,256,3) and a weight mask of shape (256,256). How do I perform element-wise multiplication between them in Keras? (All channels share the same mask.)

You need a Reshape so both tensors have the same number of dimensions, and then a Multiply layer:
mask = Reshape((256,256,1))(mask)
out = Multiply()([image,mask])
If you have variable shapes, you can use a single Lambda layer like this:
import keras.backend as K

def multiply(x):
    image, mask = x
    mask = K.expand_dims(mask, axis=-1)  # could be K.stack([mask] * 3, axis=-1) too
    return mask * image

out = Lambda(multiply)([image, mask])

As an alternative, you can do this using a Lambda layer (as in @Daniel Möller's answer, you need to add a third axis to the mask):
from keras import backend as K
out = Lambda(lambda x: x[0] * K.expand_dims(x[1], axis=-1))([image, mask])
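For completeness, here is a minimal end-to-end sketch of the Reshape/Multiply approach (the Input definitions and variable names are illustrative, not from the original question):

from keras.layers import Input, Reshape, Multiply
from keras.models import Model

image = Input(shape=(256, 256, 3))  # RGB image
mask = Input(shape=(256, 256))      # shared per-pixel weight mask

mask_3d = Reshape((256, 256, 1))(mask)  # add a channel axis so the shapes align
out = Multiply()([image, mask_3d])      # the mask broadcasts over all 3 channels

model = Model([image, mask], out)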

Related

Output of the model depends on the shape of the weights tensor

I want to train the model to sum its three inputs, so it is as simple as possible.
Firstly, the weights are initialized randomly. This produces a bad error estimate (approx. 0.5).
Then I initialize the weights with zeros. There are two options:
the shape of the weights tensor is [1, 3]
the shape of the weights tensor is [3]
When I choose the 1st option, the model still works badly and can't learn this simple formula.
When I choose the 2nd option, it works perfectly, with an error of 10e-12.
Why does the result depend on the shape of the weights? Why do I need to initialize the model with zeros to solve this simple problem?
import torch
from torch.nn import Sequential as Seq, Linear as Lin
from torch.optim.lr_scheduler import ReduceLROnPlateau
X = torch.rand((1024, 3))
y = (X[:,0] + X[:,1] + X[:,2])
m = Seq(Lin(3, 1, bias=False))
# 1 option
m[0].weight = torch.nn.parameter.Parameter(torch.tensor([[0, 0, 0]], dtype=torch.float))
# 2 option
#m[0].weight = torch.nn.parameter.Parameter(torch.tensor([0, 0, 0], dtype=torch.float))
optim = torch.optim.SGD(m.parameters(), lr=10e-2)
scheduler = ReduceLROnPlateau(optim, 'min', factor=0.5, patience=20, verbose=True)
mse = torch.nn.MSELoss()
for epoch in range(500):
    optim.zero_grad()
    out = m(X)
    loss = mse(out, y)
    loss.backward()
    optim.step()
    if epoch % 20 == 0:
        print(loss.item())
    scheduler.step(loss)
The first option doesn't learn because it fails with broadcasting: while out.shape == (1024, 1), the corresponding target y has shape (1024,). MSELoss, as expected, computes the mean of the tensor (out - y)^2, which in this case has shape (1024, 1024), clearly the wrong objective for this task. With the 2nd option, the tensor (out - y)^2 has shape (1024,), and its mean corresponds to the actual MSE. The default approach, without explicitly changing the weight shape (through options 1 and 2), would work if the target were given shape (1024, 1), for example by y = y.unsqueeze(-1) after the definition of y.
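A minimal sketch of that last fix, keeping the (1, 3) weight shape and aligning the target instead (continuing the variable names from the code above):

y = y.unsqueeze(-1)   # shape (1024,) -> (1024, 1), matching out.shape
loss = mse(m(X), y)   # (out - y) is now (1024, 1); no accidental broadcasting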

Training a parameter in the weight matrix in TensorFlow

I have a neural network. For simplicity, there's only one layer and the weight matrix is of shape 2-by-2. I need the output of the network to be the rotated version of the input, i.e., the matrix should be a valid rotation matrix. I have tried the following:
def rotate(val):
    w1 = tf.constant_initializer([[cos45, -sin45], [sin45, cos45]])
    return tf.layers.dense(inputs=val, units=2, kernel_initializer=w1, activation=tf.nn.tanh)
While training, I do not want to lose properties of the rotation matrix. In other words, I need the layer(s) to estimate only the angle (argument) of trigonometric functions in the matrix.
I read that kernel_constraint can help in this aspect, by normalizing the values. But applying kernel_constraint does not guarantee diagonal entries being equal and the off diagonal entries being negatives of each other (in this case). In general, the two properties that need to be satisfied are, the determinant should be 1 and R^T*R = I.
Is there any other way to achieve this?
You could define a custom Keras layer, something along the lines of:
from tensorflow.keras.layers import Layer
import tensorflow as tf
class Rotate(Layer):
    def build(self, input_shape):
        sh = input_shape[-1]  # feature dimension (input_shape includes the batch axis)
        shape = [sh, sh]

        # Initial weight matrix
        w = self.add_weight(shape=shape,
                            initializer='random_uniform')

        # Keep the lower-triangular part and set the upper-diagonal elements
        # to the negatives of the lower-diagonal elements
        mask = tf.cast(tf.linalg.band_part(tf.ones(shape), -1, 0), tf.float32)
        w = mask * w
        w -= tf.transpose(w)

        # Put a single shared weight on the diagonal
        diag_mask = 1 - tf.linalg.diag(tf.ones(sh))
        w = diag_mask * w
        diag_w = self.add_weight(shape=(1,),
                                 initializer='random_uniform')
        diagonal = tf.linalg.diag(tf.ones(sh)) * diag_w
        self.kernel = w + diagonal

    def call(self, inputs, **kwargs):
        return tf.matmul(inputs, self.kernel)
Note that the matrix of learnable weights self.kernel has this structure: [[D, -L], [L, D]], where D is the shared diagonal weight and L the learnable off-diagonal weight.
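As an alternative sketch (my own variant, not part of the answer above): since a 2-D rotation matrix is fully determined by its angle, you could learn the angle directly, which guarantees det(R) = 1 and R^T R = I by construction:

class RotateByAngle(Layer):
    def build(self, input_shape):
        # single learnable parameter: the rotation angle theta
        self.theta = self.add_weight(shape=(1,), initializer='random_uniform')

    def call(self, inputs, **kwargs):
        c, s = tf.cos(self.theta), tf.sin(self.theta)
        kernel = tf.reshape(tf.concat([c, -s, s, c], axis=0), (2, 2))  # [[cos, -sin], [sin, cos]]
        return tf.matmul(inputs, kernel)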

What are the inputs of Keras layers and custom functions?

Sorry for a noob's question:
Given an NN that is trained with fit_generator, with layers such as:
Lambda(...)
or
Dense(...)
and a custom loss function, what are the input tensors?
Am I correct in expecting (batch size, previous layer's output) in the case of a Lambda layer?
Is it going to be the same, (batch size, data), in the case of a custom loss function that looks like:
triplet_loss(y_true, y_pred)
Are y_true and y_pred in the format (batch, previous layer's output) and (batch, true 'expected' data we fed to the NN)?
I would probably duplicate the dense layers. Instead of having 2 layers with 128 units, have 4 layers with 64 units. The result is the same, but you will be able to perform the cross products better.
from keras.models import Model

#create dense layers and store their output tensors, they use the output of models 1 and 2 as input
d1 = Dense(64, ....)(Model_1.output)
d2 = Dense(64, ....)(Model_1.output)
d3 = Dense(64, ....)(Model_2.output)
d4 = Dense(64, ....)(Model_2.output)

cross1 = Lambda(myFunc, output_shape=....)([d1,d4])
cross2 = Lambda(myFunc, output_shape=....)([d2,d3])

#I don't really know what kind of "merge" you want, so I used concatenate; there are
#Add, Multiply and others....
output = Concatenate()([cross1,cross2])
#use the "axis" attribute of the concatenate layer to define better which axis will
#be doubled due to the concatenation

model = Model([Model_1.input,Model_2.input], output)
Now, for the lambda function:
import keras.backend as K

def myFunc(x):
    return x[0] * x[1]
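In this setup, each tensor in the list passed to myFunc has shape (batch size, 64), i.e. the outputs of the 64-unit Dense layers, which answers the first part of the question: a Lambda layer receives exactly the output tensors (batch dimension included) of the layers you wire into it.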
As for the custom loss function and what its input tensors are: it depends on how you define your model outputs.
For example, let's define a simple model that returns the input unchanged.
model = Sequential([Lambda(lambda x: x, input_shape=(1,))])
Let's use a dummy input x and label y:
x = [[0]]
x = np.array(x)
y = [[4]]
y = np.array(y)
If our custom loss function looks like this:
def mce(y_true, y_pred):
    print(y_true.shape)
    print(y_pred.shape)
    return K.mean(K.pow(K.abs(y_true - y_pred), 3))

model.compile('sgd', mce)
then we can see that the shapes of y_true and y_pred will be:
y_true: (?, ?)
y_pred: (?, 1)
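The second dimension of y_true is unknown (?) at compile time because Keras builds a generic placeholder for the labels; its shape is only pinned down when real data arrives in fit. y_pred, by contrast, inherits its (?, 1) shape from the model's output layer.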
However, for a triplet loss, the inputs to the loss function can also be received like this:
ALPHA = 0.2

def triplet_loss(x):
    anchor, positive, negative = x
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), ALPHA)
    loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
    return loss

# Source: https://github.com/davidsandberg/facenet/blob/master/src/facenet.py

def build_model(input_shape):
    # Standardizing the input shape order
    K.set_image_dim_ordering('th')

    positive_example = Input(shape=input_shape)
    negative_example = Input(shape=input_shape)
    anchor_example = Input(shape=input_shape)

    # Create a common network to share the weights across the different examples (+/-/anchor)
    embedding_network = faceRecoModel(input_shape)

    positive_embedding = embedding_network(positive_example)
    negative_embedding = embedding_network(negative_example)
    anchor_embedding = embedding_network(anchor_example)

    loss = merge([anchor_embedding, positive_embedding, negative_embedding],
                 mode=triplet_loss, output_shape=(1,))

    model = Model(inputs=[anchor_example, positive_example, negative_example],
                  outputs=loss)
    model.compile(loss='mean_absolute_error', optimizer=Adam())
    return model
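Note that merge(..., mode=...) is from the old Keras 1 API. Under Keras 2, roughly the same wiring can be sketched with a Lambda layer (an untested sketch, reusing the names above):

loss = Lambda(triplet_loss, output_shape=(1,))(
    [anchor_embedding, positive_embedding, negative_embedding])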

Thresholded Linear Layer based on Maximum and Minimum of output values

I am working on a neural network architecture which has a linear layer, and I need the output of the layer to be the same as the input if it is above a certain threshold, i.e.,
a(x) = x if x >= threshold, else a(x) = 0
And the linear layer is as follows:
t = Dense(100)
Therefore, I am using Keras' ThresholdedReLU layer after the Dense layer. The threshold depends on the maximum and minimum of the output values of the Dense layer:
threshold = delta*min{s} + (1-delta)*max{s}
where min{s} is the minimum of the 100 output values of the Dense layer
and max{s} is the maximum of the 100 output values of the Dense layer
and delta is a value in [0, 1].
Is there a way I could obtain the maximum and minimum values, calculate the threshold after each epoch and batch update, and hence obtain the thresholded output?
You could define a Lambda layer and use backend functions within it. Here's how I would do it:
from keras.layers import Dense, Lambda
from keras.models import Sequential
import keras.backend as K
import numpy as np
def thresholded_relu(x, delta):
    threshold = delta * K.min(x, axis=-1) + (1 - delta) * K.max(x, axis=-1)
    return K.cast((x > threshold[:, None]), dtype=K.dtype(x)) * x
delta = 0.5
model = Sequential()
# model.add(Dense(100, input_shape=(100,)))
model.add(Lambda(lambda x: thresholded_relu(x, delta), input_shape=(100,)))
model.compile('sgd', 'mse')
x = np.arange(0, 100, 1)[None, :]
pred = model.predict(x)
for y, p in zip(x[0], pred[0]):
    print('Input: {}. Pred: {}'.format(y, p))
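As a quick sanity check: with delta = 0.5 and the inputs 0, 1, ..., 99, the threshold comes out to 0.5 * 0 + 0.5 * 99 = 49.5, so inputs 0 through 49 are zeroed and 50 through 99 pass through unchanged.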

How do I take the squared difference of two Keras tensors?

I have a Keras Model which calculates two tensors, r1 and r2 of the same shape. I would like to have the model calculate (r1 - r2)**2.
I can take the sum of these tensors with keras.layers.add(r1, r2). I can take a product with keras.layers.multiply(r1, r2). If there were a subtract function, I'd write:
r = keras.layers.subtract(r1, r2)
square_diff = keras.layers.multiply(r, r)
but there doesn't appear to be a keras.layers.subtract function.
In lieu of that, I've been trying to figure out how to multiply one of my inputs by a constant -1 tensor and then add, but I can't figure out how to create that -1 tensor. I've tried a number of variants on
negative_one = keras.backend.constant(np.full(r1.get_shape()), -1)
none of which work, presumably because the dimensionality of r1 is (?, 128) (i.e., the first dimension is a batch size and the second represents 128 hidden elements).
What is the correct way in Keras to take the difference of two tensors?
As dhinckley mentioned, you should use a Lambda layer. But I would suggest defining your custom function first; this way the code will be a little clearer:
import keras.backend as K
from keras.layers import Lambda
def squared_differences(pair_of_tensors):
    x, y = pair_of_tensors
    return K.square(x - y)

square_diff = Lambda(squared_differences)([r1, r2])
I'm not qualified to say whether or not this is the correct way, but the following code will calculate (r1 - r2)**2 as you request. The key enabler here is the use of the Keras functional API and Lambda layers to invert the sign of an input tensor.
import numpy as np
from keras.layers import Input, Lambda
from keras.models import Model
from keras.layers import add
r1 = Input(shape=(1,2,2))
r2 = Input(shape=(1,2,2))
# Lambda for subtracting two tensors
minus_r2 = Lambda(lambda x: -x)(r2)
subtracted = add([r1,minus_r2])
out= Lambda(lambda x: x**2)(subtracted)
model = Model([r1,r2],out)
a = np.arange(4).reshape([1,1,2,2])
b = np.ones(4).reshape([1,1,2,2])
print(model.predict([a,b]))
# [[[[ 1. 0.]
# [ 1. 4.]]]]
print((a-b)**2)
# [[[[ 1. 0.]
# [ 1. 4.]]]]
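For what it's worth, more recent Keras releases do ship a subtract merge layer; assuming a version that has it (2.0.7 or later), the whole computation collapses to:

from keras.layers import Subtract, Lambda

diff = Subtract()([r1, r2])                 # r1 - r2
square_diff = Lambda(lambda t: t**2)(diff)  # (r1 - r2)**2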
