I am struggling with scaling the output of a Lambda layer. My X_train is 100*15*24 and Y_train is 100*1 (the network consists of an LSTM layer followed by Dense layers). The code is as follows:
input_shape=(timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation = "custom_activation_1", name = "dense2")(dense1)
dense3 = Dense(1, activation = "custom_activation_2", name = "dense3")(dense1)
# dense2 and dense3 have custom activation functions whose range is the whole real line (so I need to normalize their output)
## custom lambda layer / loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    # the three lines below are where the problem occurs and the program fails
    ###############################################
    scaler = MinMaxScaler()
    scaler.fit(add_input)
    normalized = scaler.transform(add_input)
    ###############################################
    return normalized
lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])
model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss='mse', optimizer='adam',metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=2, batch_size=216)
How can I normalize the output of lambda_layer properly? Any ideas or suggestions are appreciated!
I don't think scikit-learn transformers would work inside Lambda layers. If you're only interested in getting the output normalized w.r.t. the data passed in, you can do the following:
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
import tensorflow as tf
timesteps = 3
num_feat = 12
input_shape=(timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation = "custom_activation_1", name = "dense2")(dense1)
dense3 = Dense(1, activation = "custom_activation_2", name = "dense3")(dense1)
# dense2 and dense3 have custom activation functions whose range is the whole real line (so I need to normalize their output)
## custom lambda layer / loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    normalized = (add_input - tf.reduce_min(add_input, axis=0, keepdims=True)) / (tf.reduce_max(add_input, axis=0, keepdims=True) - tf.reduce_min(add_input, axis=0, keepdims=True))
    return normalized
lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])
model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss='mse', optimizer='adam',metrics=['accuracy'])
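As a quick sanity check (a minimal sketch with hypothetical dummy tensors, assuming eager execution / TF 2.x), the Lambda body rescales whatever passes through it into [0, 1] along the batch axis:

import tensorflow as tf

a = tf.constant([[1.0], [3.0], [5.0]])
b = tf.constant([[0.5], [2.0], [4.0]])
print(custom_layer([a, b]))  # a + b rescaled column-wise to [0, 1]

One thing to keep in mind: the min and max are computed over the current batch, so the scaling depends on the batch composition, including at inference time.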
I need to calculate the gradient of the validation error w.r.t inputs x. I'm trying to see how much the validation error changes when I perturb one of the training samples.
The validation error (E) explicitly depends on the model weights (W).
The model weights explicitly depend on the inputs (x and y).
Therefore, the validation error implicitly depends on the inputs.
I'm trying to calculate the gradient of E w.r.t x directly.
An alternative approach would be to calculate the gradient of E w.r.t W (can easily be calculated) and the gradient of W w.r.t x (cannot do at the moment), which would allow the gradient of E w.r.t x to be calculated.
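In symbols (a compact restatement of the above; E is the validation error, W the trained weights, and x a training input):

dE/dx = (dE/dW) * (dW/dx)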
I have attached a toy example. Thanks in advance!
import numpy as np
import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
from autograd import grad
train_images = mnist.train_images()
train_labels = mnist.train_labels()
test_images = mnist.test_images()
test_labels = mnist.test_labels()
# Normalize the images.
train_images = (train_images / 255) - 0.5
test_images = (test_images / 255) - 0.5
# Flatten the images.
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))
# Build the model.
model = Sequential([
Dense(64, activation='relu', input_shape=(784,)),
Dense(64, activation='relu'),
Dense(10, activation='softmax'),
])
# Compile the model.
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
)
# Train the model.
model.fit(
train_images,
to_categorical(train_labels),
epochs=5,
batch_size=32,
)
model.save_weights('model.h5')
# Load the model's saved weights.
# model.load_weights('model.h5')
calculate_mse = tf.keras.losses.MeanSquaredError()
test_x = test_images[:5]
test_y = to_categorical(test_labels)[:5]
train_x = train_images[:1]
train_y = to_categorical(train_labels)[:1]
train_y = tf.convert_to_tensor(train_y, np.float32)
train_x = tf.convert_to_tensor(train_x, np.float64)
with tf.GradientTape() as tape:
    tape.watch(train_x)
    model.fit(train_x, train_y, epochs=1, verbose=0)
    valid_y_hat = model(test_x, training=False)
    mse = calculate_mse(test_y, valid_y_hat)
de_dx = tape.gradient(mse, train_x)
print(de_dx)
# approach 2 - does not run
def calculate_validation_mse(x):
    model.fit(x, train_y, epochs=1, verbose=0)
    valid_y_hat = model(test_x, training=False)
    mse = calculate_mse(test_y, valid_y_hat)
    return mse
train_x = train_images[:1]
train_y = to_categorical(train_labels)[:1]
validation_gradient = grad(calculate_validation_mse)
de_dx = validation_gradient(train_x)
print(de_dx)
Here's how you can do this; the derivation is sketched below.
A few things to note:
I have reduced the feature size from 784 to 256, as I was running out of memory in Colab (the line is marked in the code). You might have to do some memory profiling to find out why.
I only computed the gradients for the first layer; this is easily extendable to the other layers.
Disclaimer: this derivation is correct to the best of my knowledge. Please do some research and verify that it is the case. You will run into memory issues for larger inputs and layer sizes.
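Here is my reading of the derivation the code follows (a sketch, assuming a single vanilla gradient-descent step with learning rate lr on the training loss):

theta' = theta - lr * dE_tr(x)/d(theta)                       # one update of the first-layer weights
dE_val/dx = dE_val/d(theta') * d(theta')/dx
          = dE_val/d(theta') * (-lr * d2E_tr / (d(theta) dx))

In the code below, grads_2 corresponds to dE_val/d(theta'), grads_1 corresponds to -lr * d2E_tr/(d(theta) dx) (computed with tape1.jacobian), and the final matmul contracts over the weight dimensions.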
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
f = 256
model = Sequential([
Dense(64, activation='relu', input_shape=(f,)),
Dense(64, activation='relu'),
Dense(10, activation='softmax'),
])
# Compile the model.
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
)
w = model.weights[0]
# Inputs and labels
x_tr = tf.Variable(np.random.normal(size=(1,f)), shape=(1, f), dtype='float32')
y_tr = np.random.choice([0,1,2,3,4,5,6,7,8,9], size=(1,1))
y_tr_onehot = tf.keras.utils.to_categorical(y_tr, num_classes=10).astype('float32')
x_v = tf.Variable(np.random.normal(size=(1,f)), shape=(1, f), dtype='float32')
y_v = np.random.choice([0,1,2,3,4,5,6,7,8,9], size=(1,1))
y_v_onehot = tf.keras.utils.to_categorical(y_v, num_classes=10).astype('float32')
# In the context of GradientTape
with tf.GradientTape() as tape1:
    with tf.GradientTape() as tape2:
        y_tr_pred = model(x_tr)
        tr_loss = tf.keras.losses.MeanSquaredError()(y_tr_onehot, y_tr_pred)
    tmp_g = tape2.gradient(tr_loss, w)
    print(tmp_g.shape)

# d(dE_tr/d(theta))/dx
# Warning: this step consumes a lot of memory for large layers
lr = 0.001
grads_1 = -lr * tape1.jacobian(tmp_g, x_tr)

with tf.GradientTape() as tape3:
    y_v_pred = model(x_v)
    v_loss = tf.keras.losses.MeanSquaredError()(y_v_onehot, y_v_pred)
# dE_val/d(theta)
grads_2 = tape3.gradient(v_loss, w)[tf.newaxis, :]
# Just crunching the dimension to get the final desired shape of (1,256)
grad = tf.matmul(tf.reshape(grads_2,[1, -1]), tf.reshape(tf.transpose(grads_1,[2,1,0,3]),[1, -1, 256]))
I'm using Keras to fit a function, and I'm new to Keras.
With a very simple network, Keras fits my function very well. I just want to know what the fitted function actually is, to understand why it works so well, but the predict function hides the details.
Here is the code I create the network:
import numpy as np
import tensorflow as tf
from tensorflow import keras
LABEL_COLUMN = "shat"
BATCH_SIZE = 16
EPOCHS = 20
trainfilePath = "F:\\PyworkingFolder\\WWSHat\\_Data\\alpha0train.csv"
testfilePath = "F:\\PyworkingFolder\\WWSHat\\_Data\\alpha0test.csv"
with open(trainfilePath, encoding='utf-8') as txtContent:
    trainArray = np.loadtxt(txtContent, delimiter=",")
with open(testfilePath, encoding='utf-8') as txtContent:
    testArray = np.loadtxt(txtContent, delimiter=",")
trainSample = trainArray[:, 0:14]
trainLable = trainArray[:, 14]
testSample = testArray[:, 0:14]
testLable = testArray[:, 14]
model = keras.Sequential([
keras.layers.Dense(14, activation='relu', input_shape=[14]),
keras.layers.Dense(15, activation='relu'),
keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
# optimizer = keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=None, decay=0.0)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
model.summary()
history = model.fit(trainSample, trainLable, epochs=EPOCHS, batch_size=BATCH_SIZE)
model.evaluate(testSample, testLable, verbose=1)
model.save("F:\\PyworkingFolder\\WWSHat\\_Data\\alpha0.h5")
What I understand is:
the layers are weight matrices and bias vectors, and each layer computes
out = max(0, input * weight + bias)
After some searching, I found that I can read the .h5 file using
import h5py
import numpy as np
FILENAME = "F:\\PyworkingFolder\\WWSHat\\_Data\\alpha0.h5"
with h5py.File(FILENAME, 'r') as f:
    dense_1 = f['/model_weights/dense_1/dense_1']
    dense_1_bias = dense_1['bias:0'][:]
    dense_1_kernel = dense_1['kernel:0'][:]
    dense_2 = f['/model_weights/dense_2/dense_2']
    dense_2_bias = dense_2['bias:0'][:]
    dense_2_kernel = dense_2['kernel:0'][:]
# print("Weight matrix 1:\n")
# print(dense_1_kernel)
# print("Basis matrix 1:\n")
# print(dense_1_bias)
# print("Weight matrix 2:\n")
# print(dense_2_kernel)
# print("Basis matrix 2:\n")
# print(dense_2_bias)
def layer_output(v, kernel, bias):
    return np.dot(v, kernel) + bias
reluFunction = np.vectorize(lambda x: x if x >= 0.0 else 0.0)
testV = np.array([[-0.004090321213057993,
0.009615388501909157,
-0.24223693596921558,
0.015504079563927319,
-0.02659541428995062,
0.018512968977547152,
0.00836788544720289,
-0.10874776132746002,
-0.045863474556415526,
-0.010195799916571194,
0.09474219315939948,
0.03606698737846194,
-0.004560110004741025,
0.028042417959738858]])
output_1 = layer_output(testV, dense_1_kernel, dense_1_bias)
output_2 = reluFunction(output_1)
output_3 = layer_output(output_2, dense_2_kernel, dense_2_bias)
output_4 = reluFunction(output_3)
However, the result output_4 is very different from what I get using
loaded_model = keras.models.load_model("F:\\PyworkingFolder\\WWSHat\\_Data\\alpha0.h5")
predicted = loaded_model(testV)
The "predicted" is very close to the ground truth while "output_4" is far away from the ground truth.
I'm stuck here, don't know why, and have failed to find information on how to extract the function I want from the Keras model. I need your help!
Thanks!
model = keras.Sequential([
keras.layers.Dense(14, activation='relu', input_shape=[14]),
keras.layers.Dense(15, activation='relu'),
keras.layers.Dense(1)
])
In your model there are 3 Dense layers. The last Dense layer has weights and biases too, and you didn't include them in your calculation (see the sketch below).
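A minimal sketch of the missing step, following the same h5py pattern as in the question (to be read inside the same with h5py.File(...) block; the exact group names such as dense_3 may differ between Keras versions, so check f['model_weights'].keys() first):

    dense_3 = f['/model_weights/dense_3/dense_3']
    dense_3_bias = dense_3['bias:0'][:]
    dense_3_kernel = dense_3['kernel:0'][:]

# the last layer is Dense(1) with a linear activation, so there is no ReLU on it
output_5 = layer_output(output_4, dense_3_kernel, dense_3_bias)  # this should now be close to loaded_model(testV)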
I'm trying to write a simple FGSM attack on MNIST. I tried the foolbox library and it seems to work, but its FGSM is very slow (perhaps because it searches for the minimum eps/perturbation that gives a different target label). I started writing my own, and my code always gives me zero perturbation, i.e. if I plot x_adversarial it is the same as x_input. I checked that the gradient computation yields an all-zero matrix. The computed loss is small, but I imagine there is still some gradient to this loss function. Can somebody please help me? I have been racking my head for a week now without any progress. Thanks again!
import tensorflow as tf
import numpy as np
import foolbox
import tensorflow.keras.backend as K
import matplotlib.pyplot as plt
# Importing the required Keras modules containing model and layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
num_digits_to_classify = 10
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)
# Making sure that the values are float so that we can get decimal points after division
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Normalizing the RGB codes by dividing it to the max RGB value.
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print('Number of images in x_train', x_train.shape[0])
print('Number of images in x_test', x_test.shape[0])
def create_model_deep():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(5,5), activation='relu', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Conv2D(64, kernel_size=(5,5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())  # Flattening the 2D arrays for fully connected layers
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_digits_to_classify, activation='softmax'))
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
model = create_model_deep()
model.summary()
model.fit(x=x_train,y=y_train, epochs=10)
model.evaluate(x_test, y_test)
########################## foolbox FGSM attack ###############################################
from keras.backend import set_learning_phase
set_learning_phase(0)
from foolbox.criteria import Misclassification
fmodel = foolbox.models.TensorFlowModel.from_keras(model, bounds=(0,1))
attack = foolbox.attacks.FGSM(fmodel, criterion=Misclassification())
fgsm_error = 0.0
for i in range(x_test.shape[0]):
    if i % 1000 == 0:
        print(i)
    adversarial = attack(x_test[i], y_test[i])
    if adversarial is not None:
        adversarial = adversarial.reshape(1, 28, 28, 1)
        model_predictions = model.predict(adversarial)
        label = np.argmax(model_predictions)
        if label != y_test[i]:
            fgsm_error = fgsm_error + 1.0
fgsm_error = fgsm_error/x_test.shape[0]
########################## My own FGSM attack ###############################################
sess = K.get_session()
eps = 0.3
x_adv = tf.placeholder(tf.float32,shape=(1,28,28,1),name="adv_example")
x_noise = tf.placeholder(tf.float32,shape=(1,28,28,1),name="adv_noise")
x_input = x_test[0].reshape(1,28,28,1)
y_input = y_test[0]
def loss_fn(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)
grad = K.gradients(loss_fn(y_input,model.output), model.input)
delta = K.sign(grad[0])
x_noise = x_noise + delta
x_adv = x_adv + eps*delta
x_adv = K.clip(x_adv,0.0,1.0)
x_adv, x_noise, grad = sess.run([x_adv, x_noise, grad], feed_dict={model.input:x_input, x_adv:x_input, x_noise:np.zeros_like(x_input)})
pred = model.predict(x_adv)
The following code seems to work now. Please look at the comment I made below.
sess = K.get_session()
eps = 0.3
i = 100
x_input = x_test[i].reshape(1,28,28,1)
y_input = y_test[i]
x_adv = x_input
# Added noise
x_noise = np.zeros_like(x_input)
def loss_fn(y_true, y_pred):
    target = K.one_hot(y_true, 10)
    loss = K.categorical_crossentropy(target, y_pred)
    return loss
    #loss = K.print_tensor(loss, message='loss = ')
    #return K.sparse_categorical_crossentropy(y_true, y_pred)

def loss_fn_sparse(y_true, y_pred):
    loss = K.sparse_categorical_crossentropy(y_true, y_pred)
    return loss
image = K.cast(x_input,dtype='float32')
y_pred = model(image)
loss = loss_fn_sparse(y_input, y_pred)
grad = K.gradients(loss, image)
delta = K.sign(grad[0])
x_noise = x_noise + delta
x_adv = x_adv + eps*delta
x_adv = K.clip(x_adv,0.0,1.0)
x_adv, x_noise = sess.run([x_adv, x_noise], feed_dict={model.input:x_input})
pred = model.predict(x_adv)
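For reference, here is a minimal sketch of the same FGSM step written with tf.GradientTape instead of placeholders and K.get_session() (this assumes TF 2.x / eager execution and reuses the trained model and data from above):

import tensorflow as tf

def fgsm_attack(model, x, y, eps=0.3):
    # x: one image of shape (1, 28, 28, 1); y: its integer label
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y = tf.convert_to_tensor([y], dtype=tf.int64)
    with tf.GradientTape() as tape:
        tape.watch(x)
        pred = model(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, pred)
    grad = tape.gradient(loss, x)
    # one FGSM step: move the input along the sign of the loss gradient
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0).numpy()

# x_adv = fgsm_attack(model, x_test[100].reshape(1, 28, 28, 1), y_test[100])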
I'm working on a sentiment analysis task using the Universal Sentence Encoder (embed_size=512) with a CNN, but I get an error that says: Input 0 is incompatible with layer conv1d_6: expected ndim=3, found ndim=2.
I also want to know whether it is right to combine the Universal Sentence Encoder with a CNN in this way.
pickle_in=open("X.pickle","rb")
X=pickle.load(pickle_in)
X = X.tolist()  # convert X to a list; the embedding code works once the pandas.Series is converted to a list
X = np.array(X, dtype=object)[:, np.newaxis]
pickle_in=open("Y.pickle","rb")
Y=pickle.load(pickle_in)
Y = np.asarray(pd.get_dummies(Y), dtype = np.int8)
import tensorflow as tf
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.15, random_state=42)
X_train, X_Val, Y_train, Y_Val = train_test_split(X_train, Y_train, test_size=0.15, random_state=42)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
print(X_Val.shape,Y_Val.shape)
type(Y_test)
embed_size = embed.get_output_info_dict()['default'].get_shape()[1].value
def UniversalEmbedding(x):
    return embed(tf.squeeze(tf.cast(x, tf.string)),
                 signature="default", as_dict=True)["default"]
import keras
seed=7
np.random.seed(seed)
from keras.layers import Input, Dense, concatenate, Activation, Conv1D, Dropout, GlobalMaxPooling1D
from keras import layers
from keras.models import Model
input_text = layers.Input(shape=(1,), dtype=tf.string)
embedding = layers.Lambda(UniversalEmbedding, output_shape=(embed_size,))(input_text)
bigram_branch = Conv1D(filters=64, kernel_size=1, padding='same',
activation='relu', strides=1)(embedding)
bigram_branch = GlobalMaxPooling1D()(bigram_branch)
trigram_branch = Conv1D(filters=64, kernel_size=2, padding='same',
activation='relu', strides=1)(embedding)
trigram_branch = GlobalMaxPooling1D()(trigram_branch)
fourgram_branch = Conv1D(filters=64, kernel_size=3, padding='same',
activation='relu', strides=1)(embedding)
fourgram_branch = GlobalMaxPooling1D()(fourgram_branch)
merged = concatenate([bigram_branch, trigram_branch, fourgram_branch],
axis=1)
merged = Dense(512, activation='relu')(merged)
merged = Dropout(0.8)(merged)
merged = Dense(2)(merged)
output = Activation('sigmoid')(merged)
model = Model(inputs=[input_text], outputs=[output])
adam=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None,
decay=0.0, amsgrad=False)
model.compile(loss='mean_squared_error',
optimizer= adam,
metrics=['accuracy'])
model.summary()
You cannot pass the Universal Sentence Encoder output directly to Conv1D, because Conv1D expects a tensor of shape [batch, sequence, feature] while the Universal Sentence Encoder outputs [batch, feature]. This is also stated on tfhub.dev:
The input is variable length English text and the output is a 512 dimensional vector.
As for how to fix it: in my view, the easiest solution is to use ELMo from TensorFlow Hub. With ELMo you can map each sentence to [batch, sequence, feature] and then feed that into the Conv1D.
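A rough sketch of that idea (this assumes the TF1-style hub.Module API and the google/elmo/2 module, whose "elmo" output keeps the token dimension with 1024 features per token):

import tensorflow as tf
import tensorflow_hub as hub
from keras import layers

elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=False)

def ElmoEmbedding(x):
    # the "elmo" key returns [batch, max_tokens, 1024]; "default" would pool it down to [batch, 1024]
    return elmo(tf.squeeze(tf.cast(x, tf.string)), signature="default", as_dict=True)["elmo"]

input_text = layers.Input(shape=(1,), dtype=tf.string)
embedding = layers.Lambda(ElmoEmbedding, output_shape=(None, 1024))(input_text)
bigram_branch = layers.Conv1D(filters=64, kernel_size=2, padding='same', activation='relu')(embedding)

The rest of the model (the GlobalMaxPooling1D and Dense layers) can stay as in the question.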
I am attempting to wrap my keras models in scikit learn GridSearchCV and Pipeline structures for hyperparameter tuning.
It works absolutely fine when the build_fn function passed to KerasClassifier takes no arguments. However, it fails whenever I use a function that takes arguments.
Example code is below:
def prepare_classifier(x, y):
    shape_of_input = x.shape
    shape_of_target = y.shape

    classifier = Sequential()
    ## number of neurons = 30
    ## kernel_initializer determines how the weights are initialized
    ## activation is the activation function at this particular hidden layer
    ## input_shape is the number of features in a single row.. in this case it is shape_of_input[1]
    ## shape_of_input[0] is the total number of such rows
    classifier.add(Dense(units=30, activation='relu', kernel_initializer='uniform', input_dim=shape_of_input[1]))
    classifier.add(Dense(units=30, activation='relu', kernel_initializer='uniform'))
    ## we are predicting 10 digits for each row of x.
    ## in total there are shape_of_input[0] rows
    classifier.add(Dense(10, activation='softmax'))
    ## categorical_crossentropy is the loss function for a multi-output model
    classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return classifier
def fit(classifier, x_train, y_train, epoch_size, batch_size=10):
    pipeline = Pipeline([
        ('keras_classifier', classifier)
    ])
    param_grid = {
        'keras_classifier__batch_size': [10, 20, 30, 50],
        'keras_classifier__epochs': [100, 200, 300],
        'keras_classifier__x': [x_train],
        'keras_classifier__y': [y_train],
    }
    grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, n_jobs=-1)
    grid.fit(x_train, y_train)
    print("Best parameters are: ", grid.best_params_, '\n grid best score:', grid.best_score_)
classifier = KerasClassifier(build_fn = prepare_classifier, x = x_train[0:100], y = y_train )
fit(classifier, x_train[:100], y_train, epoch_size )
This is for some x and some y data (P.S. I used the MNIST data).
The error I get is:
RuntimeError: Cannot clone object , as the constructor either does not set or modifies parameter x
However, if my prepare_classifier function takes no arguments, the code works absolutely fine.
What am I doing incorrectly?
Solved it. Essentially, the line below was the issue:
classifier = KerasClassifier(build_fn = prepare_classifier, x = x_train[0:100], y = y_train )
It needed to be changed to:
classifier = KerasClassifier(build_fn = prepare_classifier)
and the parameters for prepare_classifier need to be sent via param_grid, as in the sketch below.
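A minimal sketch of that setup (input_dim here is a hypothetical build_fn hyperparameter; the point is that everything build_fn needs is passed through param_grid or as KerasClassifier keyword arguments, never as raw training arrays):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

def prepare_classifier(input_dim=784):
    classifier = Sequential()
    classifier.add(Dense(units=30, activation='relu', kernel_initializer='uniform', input_dim=input_dim))
    classifier.add(Dense(units=30, activation='relu', kernel_initializer='uniform'))
    classifier.add(Dense(10, activation='softmax'))
    classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return classifier

classifier = KerasClassifier(build_fn=prepare_classifier)
pipeline = Pipeline([('keras_classifier', classifier)])
param_grid = {
    'keras_classifier__input_dim': [784],
    'keras_classifier__batch_size': [10, 20, 30, 50],
    'keras_classifier__epochs': [100, 200, 300],
}
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, n_jobs=-1)
# grid.fit(x_train[:100], y_train[:100])  # y one-hot encoded, since the model uses categorical_crossentropy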