Keras - Reuse weights from a previous layer - converting to a Keras tensor

I am trying to reuse the weight matrix from a previous layer. As a toy example I want to do something like this:
import numpy as np
from keras.layers import Dense, Input
from keras.layers import merge
from keras import backend as K
from keras.models import Model
inputs = Input(shape=(4,))
inputs2 = Input(shape=(4,))
dense_layer = Dense(10, input_shape=(4,))
dense1 = dense_layer(inputs)
def my_fun(my_inputs):
    w = my_inputs[0]
    x = my_inputs[1]
    return K.dot(w, x)
merge1 = merge([dense_layer.W, inputs2], mode=my_fun)
The problem is that dense_layer.W is not a Keras tensor, so I get the following error:
Exception: Output tensors to a Model must be Keras tensors. Found: dot.0
Any idea on how to convert dense_layer.W to a Keras tensor?
Thanks

It seems that you want to share weights between layers.
I think you can use dense_layer as a shared layer for both inputs and inputs2:
merge1 = dense_layer(inputs2)
Do check out shared layers: https://keras.io/getting-started/functional-api-guide/#shared-layers
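A minimal runnable sketch of that idea, building on the question's own setup (a single Dense instance applied to both inputs, so they share the same weights):
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model

inputs = Input(shape=(4,))
inputs2 = Input(shape=(4,))

# One Dense instance: calling it on both inputs reuses the same W and b.
dense_layer = Dense(10)
dense1 = dense_layer(inputs)
merge1 = dense_layer(inputs2)  # shares weights with dense1

model = Model([inputs, inputs2], [dense1, merge1])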

I don't think you can use the merge layer like this.
But to answer your question, you will probably have to create a custom layer with tied weights. Look at this example.
Otherwise, the way to access the weights of a layer is the get_weights() method, which returns a list of numpy arrays containing the weights. For a Dense layer, the list contains the weight matrix and the bias.
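For illustration, a small sketch of that accessor on the question's dense_layer (the layer must already be built, i.e. called on an input, as it is above):
weights, bias = dense_layer.get_weights()  # list of numpy arrays: [W, b]
print(weights.shape)  # (4, 10) for Dense(10) on a 4-dimensional input
print(bias.shape)     # (10,)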

There are two cases for the solution, depending on what you are trying to do:
If you want to share the W matrix between your two operations, so that both always see the same values even as W changes during training or for some other reason, use dense.weights[0], which is the W matrix as a tensor from your dense layer.
If you only need the value W holds at the time the code is written, and that value is never going to change, use K.constant(dense.get_weights()[0]), which extracts the weights as a numpy array and converts it into a constant tensor.
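A short sketch contrasting the two cases, using the dense_layer from the question:
from keras import backend as K

# Case 1: a live view of W; this tensor reflects later weight updates.
w_shared = dense_layer.weights[0]

# Case 2: a frozen snapshot of W's current value, fixed from here on.
w_frozen = K.constant(dense_layer.get_weights()[0])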

Related

How to increase the inputs of each layer in the neural network by a specific scale?
I am working on a neural network with Keras and TensorFlow.
I'd like to add a feature to the network: during training, I want to shift a specific range of inputs at each layer. For example:
Let's say the inputs of layer one span [-2, 2] and I want to make sure no input falls in [0, 0.5]. So I'd like to add 0.5 to every input whose value lies in [0, 0.5].
How could I do that during the training process?
Thank you very much.
You might try a Lambda layer. An example implementation is below; I hope this helps.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

def myClippingFunction(x):
    # mark the entries that fall inside [0, 0.5] ...
    y = tf.math.logical_and(tf.math.greater_equal(x, [[0]]),
                            tf.math.less_equal(x, [[0.5]]))
    # ... and shift only those entries by 0.5
    z = tf.where(y, x + 0.5, x)
    return z

# create simple model
inputA = layers.Input((1,))
x = layers.Lambda(myClippingFunction)(inputA)
myModel = keras.Model(inputs=inputA, outputs=x)

x_data = np.array([[-0.2], [0.6]])
myModel.predict(x_data)  # both values lie outside [0, 0.5], so they pass through unchanged

Calling K.eval() on input_tensor inside keras custom loss function?

I'm trying to convert the input tensor to a numpy array inside a custom keras loss function, after following the instructions here.
The above code runs on my machine with no errors. Now, I want to extract a numpy array with values from the input tensor. However, I get the following error:
"tensorflow.python.framework.errors_impl.InvalidArgumentError: You
must feed a value for placeholder tensor 'input_1' with dtype float
[[Node: input_1 = Placeholderdtype=DT_FLOAT, shape=[],
_device="/job:localhost/replica:0/task:0/cpu:0"]]"
I need to convert to a numpy array because I have other Keras models that must operate on the input. I haven't shown those lines inside joint_loss below, but even this reduced code sample doesn't run at all.
import numpy as np
from keras.models import Model, Sequential
from keras.layers import Dense, Activation, Input
import keras.backend as K
def joint_loss_wrapper(x):
    def joint_loss(y_true, y_pred):
        x_val = K.eval(x)
        return y_true - y_pred
    return joint_loss
input_tensor = Input(shape=(6,))
hidden1 = Dense(30, activation='relu')(input_tensor)
hidden2 = Dense(40, activation='sigmoid')(hidden1)
out = Dense(1, activation='sigmoid')(hidden2)
model = Model(input_tensor, out)
model.compile(loss=joint_loss_wrapper(input_tensor), optimizer='adam')
I figured it out!
What you want to do is use the Keras Functional API.
Your submodels' outputs can then be obtained as tensors with y_pred_submodel = submodel(x), just as a Keras layer operates on a tensor.
Manipulate only tensors within the loss function. That works fine.
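A hedged sketch of that pattern; submodel here is a hypothetical stand-in for the other Keras models mentioned in the question:
def joint_loss_wrapper(x, submodel):
    # submodel(x) is a Keras tensor, so it can take part in the loss graph;
    # no K.eval() and no numpy conversion is needed.
    x_pred = submodel(x)
    def joint_loss(y_true, y_pred):
        return K.mean(K.square(y_true - y_pred)) + K.mean(K.square(x_pred))
    return joint_loss

model.compile(loss=joint_loss_wrapper(input_tensor, submodel), optimizer='adam')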

model.predict in keras using universal sentence encoder giving shape error

I am using keras model.predict to predict sentiments, with universal sentence embeddings. While predicting, I am getting the error described below.
Please provide your valuable insights.
Regards.
I have run the code for two sets of inputs. For input 1 the prediction is obtained, while it is not working for input 2.
Input 1 has the form: {(a1, [sents1]), ...}
Input 2: {((a1, a2), [sents11]), ...}
The input for predicting is the [sents1], [sents11], etc. extracted from these.
I saw the related question (Keras model.predict function giving input shape error), but I don't know whether it is resolved. Furthermore, input 1 is working.
import tensorflow as tf
import keras.backend as K
from keras import layers
from keras.models import Model
import numpy as np

# `embed`, `embed_size`, `category_counts`, and `input2` are assumed to be
# defined earlier (e.g. `embed` loaded from the TF Hub Universal Sentence
# Encoder module).
def UniversalEmbedding(x):
    return embed(tf.squeeze(tf.cast(x, tf.string)),
                 signature="default", as_dict=True)["default"]

input_text = layers.Input(shape=(1,), dtype=tf.string)
embedding = layers.Lambda(UniversalEmbedding, output_shape=(embed_size,))(input_text)
dense = layers.Dense(256, activation='relu')(embedding)
pred = layers.Dense(category_counts, activation='softmax')(dense)
model = Model(inputs=[input_text], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

sents1 = list(input2.items())
with tf.Session() as session:
    K.set_session(session)
    session.run(tf.global_variables_initializer())
    session.run(tf.tables_initializer())
    # model.load_weights(.//)
    for i, ch in enumerate(sents1):
        new_text = ch[1]
        if len(new_text) > 1:
            new_text = np.array(new_text, dtype=object)[:, np.newaxis]
            predicts = model.predict(new_text, batch_size=32)
InvalidArgumentError: input must be a vector, got shape: []
[[{{node lambda_2/module_1_apply_default/tokenize/StringSplit}} = StringSplit[skip_empty=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](lambda_2/module_1_apply_default/RegexReplace_1, lambda_2/module_1_apply_default/tokenize/Const)]]
Try removing leading and trailing blanks from each sentence:
new_text.strip()
USE preprocesses sentences by splitting on spaces, so leading or trailing spaces create empty tokens, which cannot be embedded.
(Hope this answer is not too late.)
There could also be missing values, i.e. entries without text, among the sentences; these need to be excluded as well.
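A small sketch of that cleanup, reusing the question's variable names: strip whitespace and drop empty or non-string entries before predicting:
# keep only non-empty string sentences, stripped of surrounding whitespace
cleaned = [s.strip() for s in new_text if isinstance(s, str) and s.strip()]
if cleaned:
    batch = np.array(cleaned, dtype=object)[:, np.newaxis]
    predicts = model.predict(batch, batch_size=32)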

sample_weight parameter shape error in scikit-learn GridSearchCV

Passing the sample_weight parameter to GridSearchCV raises an error due to an incorrect shape. My suspicion is that cross-validation does not split sample_weight to match the train/validation split of the dataset.
First part: Using sample_weight as a model parameter works beautifully
Let's consider a simple example, first without GridSearch:
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
dataURL = 'https://raw.githubusercontent.com/mcasl/PAELLA/master/data/sinusoidal_data.csv'
x = pd.read_csv(dataURL, usecols=["x"]).x
y = pd.read_csv(dataURL, usecols=["y"]).y
occurrences = pd.read_csv(dataURL, usecols=["Occurrences"]).Occurrences
my_sample_weights = (1 - occurrences/10000)**3
my_sample_weights contains the importance that I assign to each observation in x, y, as the following picture shows. The points of the sinusoidal curve get higher weights than those forming the background noise.
plt.scatter(x, y, c=my_sample_weights>0.9, cmap="cool")
Let's train a neural network, first without using the information contained in my_sample_weights:
def make_model(number_of_hidden_neurons=1):
    model = Sequential()
    model.add(Dense(number_of_hidden_neurons, input_shape=(1,), activation='tanh'))
    model.add(Dense(1, activation='linear'))
    model.compile(optimizer='sgd', loss='mse')
    return model
net_Not_using_sample_weight = make_model(number_of_hidden_neurons=6)
net_Not_using_sample_weight.fit(x, y, epochs=1000)
plt.scatter(x, y)
plt.scatter(x, net_Not_using_sample_weight.predict(x), c="green")
As the following picture shows, the neural network tries to fit the shape of the sinusoid, but the background noise prevents a good fit.
Now, using the information in my_sample_weights, the quality of the prediction is much better.
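The question omits the weighted training call; presumably (an assumption on my part) it looks something like this:
# hypothetical weighted fit, mirroring the unweighted run above
net_using_sample_weight = make_model(number_of_hidden_neurons=6)
net_using_sample_weight.fit(x, y, epochs=1000, sample_weight=my_sample_weights.values)
plt.scatter(x, net_using_sample_weight.predict(x), c="red")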
Second part: Using sample_weight as a GridSearchCV parameter raises an error
my_Regressor = KerasRegressor(make_model)
validator = GridSearchCV(my_Regressor,
                         param_grid={'number_of_hidden_neurons': range(4, 5),
                                     'epochs': [500]},
                         fit_params={'sample_weight': [my_sample_weights]},
                         n_jobs=1)
validator.fit(x, y)
Trying to pass the sample_weights as a parameter gives the following error:
...
ValueError: Found a sample_weight array with shape (1000,) for an input with shape (666, 1). sample_weight cannot be broadcast.
It seems that the sample_weight vector has not been split in a similar manner to the input array.
For what it's worth:
import sklearn
print(sklearn.__version__)
0.18.1
import keras
print(keras.__version__)
2.0.5
The problem is that GridSearchCV uses 3-fold cross-validation by default, unless explicitly stated otherwise. This means that 2/3 of the data points are used as training data and 1/3 for validation, which fits the error message: the 1000-element sample_weight in fit_params doesn't match the 666 training examples used in each fold. Adjust the size and the code will run:
my_sample_weights = np.random.uniform(size=666)
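As an alternative sketch (an assumption on my part: it relies on scikit-learn slicing array-like fit parameters whose length matches the number of samples per CV fold, which the 0.18 release used here may not do), passing the full-length weights unwrapped, rather than inside a list, lets the CV machinery index them with each training split:
# Assumption: this scikit-learn version slices array-like fit parameters
# (length == n_samples) per fold, so each fold gets its 666-element slice.
validator = GridSearchCV(my_Regressor,
                         param_grid={'number_of_hidden_neurons': range(4, 5),
                                     'epochs': [500]},
                         fit_params={'sample_weight': my_sample_weights},
                         n_jobs=1)
validator.fit(x, y)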
We developed PipeGraph, an extension to the scikit-learn Pipeline that allows you to get intermediate data, build graph-like workflows, and, in particular, solve this problem (see the examples in the gallery at http://mcasl.github.io/PipeGraph).

Strange behaviour sequence to sequence learning for variable length sequences

I am training a sequence-to-sequence model for variable-length sequences with Keras, but I am running into some unexpected problems. It is unclear to me whether the behaviour I am observing is the intended behaviour of the library, and why it would be.
Model Creation
I've made a recurrent model with an embedding layer and a GRU recurrent layer that illustrates the problem. I used mask_zero=True for the embedding layer instead of a masking layer, but changing this doesn't seem to make a difference (nor does adding a masking layer before the output):
import numpy
from keras.layers import Embedding, GRU, TimeDistributed, Dense, Input
from keras.models import Model
import keras.preprocessing.sequence
numpy.random.seed(0)
input_layer = Input(shape=(3,), dtype='int32', name='input')
embeddings = Embedding(input_dim=20, output_dim=2, input_length=3, mask_zero=True, name='embeddings')(input_layer)
recurrent = GRU(5, return_sequences=True, name='GRU')(embeddings)
output_layer = TimeDistributed(Dense(1), name='output')(recurrent)
model = Model(input=input_layer, output=output_layer)
output_weights = model.layers[-1].get_weights()
output_weights[1] = numpy.array([0.2])
model.layers[-1].set_weights(output_weights)
model.compile(loss='mse', metrics=['mse'], optimizer='adam', sample_weight_mode='temporal')
I use masking and the sample_weight parameter to exclude the padding values from the training/evaluation. I will test this model on one input/output sequence which I pad using the Keras padding function:
X = [[1, 2]]
X_padded = keras.preprocessing.sequence.pad_sequences(X, dtype='float32', maxlen=3)
Y = [[[1], [2]]]
Y_padded = keras.preprocessing.sequence.pad_sequences(Y, maxlen=3, dtype='float32')
Output Shape
Why is the output expected to be formatted this way? Why can I not use input/output sequences that have exactly the same dimensionality? model.evaluate(X_padded, Y_padded) gives me a dimensionality error.
Then, when I run model.predict(X_padded) I get the following output (with numpy.random.seed(0) before generating the model):
[[[ 0.2 ]
[ 0.19946882]
[ 0.19175649]]]
Why isn't the first input masked for the output layer? Is the output value computed anyway (and equal to the bias, as the hidden layer values are 0)? This does not seem desirable. Adding a Masking layer before the output layer does not solve this problem.
MSE calculation
Then, when I evaluate the model (model.evaluate(X_padded, Y_padded)), it returns the mean squared error (MSE) of the entire sequence (1.3168), including this first value. I suppose that is to be expected when it isn't masked, but it is not what I want.
From the Keras documentation I understand I should use the sample_weight parameter to solve this problem, which I tried:
sample_weight = numpy.array([[0, 1, 1]])
model_evaluation = model.evaluate(X_padded, Y_padded, sample_weight=sample_weight)
print model.metrics_names, model_evaluation
The output I get is
['loss', 'mean_squared_error'] [2.9329459667205811, 1.3168648481369019]
This leaves the metric (MSE) unaltered: it is still the MSE over all values, including the one that I wanted masked. Why? This is not what I want when I evaluate my model. It does cause a change in the loss value, which appears to be the MSE over the last two values, normalised so as not to give more weight to longer sequences.
Am I doing something wrong with the sample weights? I also really cannot figure out how this loss value came about. What should I do to exclude the padded values from both training and evaluation (I assume the sample_weight parameter works the same way in the fit function)?
It was indeed a bug in the library; the issue is resolved in Keras 2.
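For completeness, a hedged sketch under Keras 2 (assuming a release that has the weighted_metrics compile argument): metrics listed under metrics stay unweighted by design, so a sample-weighted MSE has to be requested explicitly:
# Assumption: Keras 2 with the weighted_metrics compile argument.
# weighted_metrics applies sample_weight to the metric, so the masked
# first timestep no longer contributes to the reported MSE.
model.compile(loss='mse', weighted_metrics=['mse'], optimizer='adam',
              sample_weight_mode='temporal')
model.evaluate(X_padded, Y_padded, sample_weight=numpy.array([[0, 1, 1]]))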
