I am trying to train a deep neural network using the MNIST dataset.
BATCH_SIZE = 100
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(num_validation_samples)
test_data = scaled_test_data.batch(num_test_samples)
validation_inputs, validation_targets = next(iter(validation_data))
input_size = 784
output_size = 10
hidden_layer_size = 50
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28,28,1)),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
NUM_EPOCHS = 5
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))
The model.fit call throws the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-58-c083185dafc6> in <module>
1 NUM_EPOCHS = 5
----> 2 model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))
~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
726 max_queue_size=max_queue_size,
727 workers=workers,
--> 728 use_multiprocessing=use_multiprocessing)
729
730 def evaluate(self,
~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
222 validation_data=validation_data,
223 validation_steps=validation_steps,
--> 224 distribution_strategy=strategy)
225
226 total_samples = _get_total_number_of_samples(training_data_adapter)
~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
562 class_weights=class_weights,
563 steps=validation_steps,
--> 564 distribution_strategy=distribution_strategy)
565 elif validation_steps:
566 raise ValueError('`validation_steps` should not be specified if '
~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
604 max_queue_size=max_queue_size,
605 workers=workers,
--> 606 use_multiprocessing=use_multiprocessing)
607 # As a fallback for the data type that does not work with
608 # _standardize_user_data, use the _prepare_model_with_inputs.
~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, batch_size, epochs, steps, shuffle, **kwargs)
252 if not batch_size:
253 raise ValueError(
--> 254 "`batch_size` or `steps` is required for `Tensor` or `NumPy`"
255 " input data.")
256
ValueError: `batch_size` or `steps` is required for `Tensor` or `NumPy` input data.
The training and validation data are obtained from the MNIST dataset: part of the data is used for training and part for testing.
What am I doing wrong here?
Update
As per Dominques' suggestion, I have changed model.fit to
model.fit(train_data, batch_size=128, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))
But now I get the following error:
ValueError: The `batch_size` argument must not be specified for the given input type. Received input: <BatchDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.int64)>, batch_size: 128
The TF docs will give you more clues about why you get the error:
https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. validation_data will override validation_split. validation_data could be:
• tuple (x_val, y_val) of Numpy arrays or tensors
• tuple (x_val, y_val, val_sample_weights) of Numpy arrays
• dataset
For the first two cases, batch_size must be provided. For the last case, validation_steps must be provided.
Since you already have the validation dataset batched, consider using it directly and specifying the validation steps, as below.
BATCH_SIZE = 100
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(BATCH_SIZE)
...
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=validation_data, validation_steps=1)
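Note that validation_steps=1 evaluates only one batch of the validation set per epoch. If the batched validation set spans several batches, a minimal sketch for covering all of it (assuming num_validation_samples and BATCH_SIZE as defined in the question):
import math

# One validation step per batch, so fit() walks the whole validation set.
validation_steps = math.ceil(num_validation_samples / BATCH_SIZE)
model.fit(train_data,
          epochs=NUM_EPOCHS,
          validation_data=validation_data,
          validation_steps=validation_steps)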
You need to specify the batch size, i.e. how many data points should be included in each iteration. If you look at the documentation, you will see that there is no default value set.
https://www.tensorflow.org/api_docs/python/tf/keras/Sequential
You can set the value by adding batch_size to the fit call. Good values are normally powers of two (2**n), as this allows for more efficient processing with multiple cores. For you this shouldn't make a strong difference though :)
model.fit(train_data,
          batch_size=128,
          epochs=NUM_EPOCHS,
          validation_data=(validation_inputs,validation_targets))
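Note that this advice applies to NumPy array or tensor inputs. As the question's update shows, passing batch_size alongside an already-batched tf.data.Dataset raises an error; in that case the dataset's own batching is used and batch_size must be omitted from fit.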
I don't know why nobody has mentioned it, but your problem is the y_train data: you don't supply it as an argument to your model.
model.fit(X_Train, y_train, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)
Instead of y_train, you are giving:
model.fit(train_data, batch_size=128 ....
And getting an error saying:
ValueError: `batch_size` or `steps` is required for `Tensor` or `NumPy` input data.
I hope it helps.
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
Changing it as follows (by adding validation_steps=1) will do the trick:
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets),validation_steps=1, verbose=2)
I changed the input_shape=(28,28,1) to input_shape=(28,28,3) and it worked for me.
Related
I use word2vec and a BiLSTM to implement sentiment analysis for movie reviews. When I train my model in a Jupyter notebook, I always get TypeError: update() got an unexpected keyword argument 'force' at the last batch of the first epoch.
Here is my code:
batch_size = 50
result = model.fit(
    X_train,
    Y_train,
    validation_data=(X_test, Y_test),
    batch_size=batch_size,
    epochs=5
)
And the error:
Train on 1600 samples, validate on 400 samples
Epoch 1/5
1550/1600 [============================>.] - ETA: 2s - loss: 0.7257 - acc: 0.4961
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-360-8eaf5ac2ad33> in <module>()
17 validation_data=(X_test, Y_test),
18 batch_size=batch_size,
---> 19 epochs=5
20 )
~/opt/anaconda3/envs/WDPS/lib/python3.6/site-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
958 initial_epoch=initial_epoch,
959 steps_per_epoch=steps_per_epoch,
--> 960 validation_steps=validation_steps)
961
962 def evaluate(self, x, y, batch_size=32, verbose=1,
~/opt/anaconda3/envs/WDPS/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1648 initial_epoch=initial_epoch,
1649 steps_per_epoch=steps_per_epoch,
-> 1650 validation_steps=validation_steps)
1651
1652 def evaluate(self, x=None, y=None,
~/opt/anaconda3/envs/WDPS/lib/python3.6/site-packages/keras/engine/training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
1231 for l, o in zip(out_labels, val_outs):
1232 epoch_logs['val_' + l] = o
-> 1233 callbacks.on_epoch_end(epoch, epoch_logs)
1234 if callback_model.stop_training:
1235 break
~/opt/anaconda3/envs/WDPS/lib/python3.6/site-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
71 logs = logs or {}
72 for callback in self.callbacks:
---> 73 callback.on_epoch_end(epoch, logs)
74
75 def on_batch_begin(self, batch, logs=None):
~/opt/anaconda3/envs/WDPS/lib/python3.6/site-packages/keras/callbacks.py in on_epoch_end(self, epoch, logs)
304 self.log_values.append((k, logs[k]))
305 if self.verbose:
--> 306 self.progbar.update(self.seen, self.log_values, force=True)
307
308
TypeError: update() got an unexpected keyword argument 'force'
At first I had a line verbose=1 below epochs=5. The same error appeared, with the arrow pointing to verbose=1. I then changed it to verbose=2 and also tried deleting verbose entirely, but I still have the problem.
I tried changing the batch size and the size of the training set, but it still didn't work out. The error always appears at the last batch.
Python version: 3.6.2
Keras version: 2.1.1
I'm trying to make my own attention model, and I found example code here:
https://www.kaggle.com/takuok/bidirectional-lstm-and-attention-lb-0-043
It works just fine when I run it without modification. But my own data contains only numeric values, so I had to change the example code: I erased the embedding part, and this is what I changed:
xtr = np.reshape(xtr, (xtr.shape[0], 1, xtr.shape[1]))
# xtr.shape = (n_sample_train, 1, 150), y.shape = (n_sample_train, 6)
xte = np.reshape(xte, (xte.shape[0], 1, xte.shape[1]))
# xte.shape = (n_sample_test, 1, 150)
model = BidLstm(maxlen, max_features)
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
And my BidLstm function looks like:
def BidLstm(maxlen, max_features):
    inp = Input(shape=(1, 150))
    # x = Embedding(max_features, embed_size, weights=[embedding_matrix],
    #               trainable=False)(inp) -> I don't need embedding since my own data is numeric.
    x = Bidirectional(LSTM(300, return_sequences=True, dropout=0.25,
                           recurrent_dropout=0.25))(inp)
    x = Attention(maxlen)(x)
    x = Dense(256, activation="relu")(x)
    x = Dropout(0.25)(x)
    x = Dense(6, activation="sigmoid")(x)
    model = Model(inputs=inp, outputs=x)
    return model
And it said:
InvalidArgumentErrorTraceback (most recent call last)
<ipython-input-62-929955370368> in <module>
29
30 early = EarlyStopping(monitor="val_loss", mode="min", patience=1)
---> 31 model.fit(xtr, y, batch_size=128, epochs=15, validation_split=0.1, callbacks=[early])
32 #model.fit(xtr, y, batch_size=256, epochs=1, validation_split=0.1)
33
/usr/local/lib/python3.5/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1037 initial_epoch=initial_epoch,
1038 steps_per_epoch=steps_per_epoch,
-> 1039 validation_steps=validation_steps)
1040
1041 def evaluate(self, x=None, y=None,
/usr/local/lib/python3.5/dist-packages/keras/engine/training_arrays.py in fit_loop(model, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
197 ins_batch[i] = ins_batch[i].toarray()
198
--> 199 outs = f(ins_batch)
200 outs = to_list(outs)
201 for l, o in zip(out_labels, outs):
/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
2713 return self._legacy_call(inputs)
2714
-> 2715 return self._call(inputs)
2716 else:
2717 if py_any(is_tensor(x) for x in inputs):
/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
2673 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
2674 else:
-> 2675 fetched = self._callable_fn(*array_vals)
2676 return fetched[:len(self.outputs)]
2677
/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1437 ret = tf_session.TF_SessionRunCallable(
1438 self._session._session, self._handle, args, status,
-> 1439 run_metadata_ptr)
1440 if run_metadata:
1441 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
526 None, None,
527 compat.as_text(c_api.TF_Message(self.status.status)),
--> 528 c_api.TF_GetCode(self.status.status))
529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to
InvalidArgumentError: Input to reshape is a tensor with 128 values, but the requested shape requires a multiple of 150
[[{{node attention_16/Reshape_2}}]]
[[{{node loss_5/mul}}]]
I think something is wrong in the loss function, as the error says:
Input to reshape is a tensor with 2 * "batch_size" values, but the requested shape has "batch_size"
but I don't know which part to fix.
My Keras and TensorFlow versions are 2.2.4 and 1.13.0-rc0.
Please help. Thanks.
Edit 1
I've changed my batch size to a multiple of 150 (batch_size = 150), as Keras asked. Then it reports:
Train on 143613 samples, validate on 15958 samples
Epoch 1/15
143400/143613 [============================>.] - ETA: 0s - loss: 0.1505 - acc: 0.9619
InvalidArgumentError: Input to reshape is a tensor with 63 values, but the requested shape requires a multiple of 150
[[{{node attention_18/Reshape_2}}]]
[[{{node metrics_6/acc/Mean_1}}]]
The details are the same as before. What should I do?
Your input shape must be (150, 1).
LSTM shapes are (batch, steps, features), and it's pointless to use an LSTM with only 1 step (unless you are using custom training loops with stateful=True, which is not your case). A sketch of the corresponding fix is below.
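A minimal sketch of that reshape, assuming xtr and xte are the original 2-D arrays of shape (n_samples, 150) and each of the 150 columns should become one timestep with a single feature (variable names follow the question):
import numpy as np

# (n_samples, 150) -> (n_samples, 150, 1): 150 timesteps, 1 feature each
xtr = np.reshape(xtr, (xtr.shape[0], xtr.shape[1], 1))
xte = np.reshape(xte, (xte.shape[0], xte.shape[1], 1))

# Inside BidLstm, the Input layer then matches (steps, features):
# inp = Input(shape=(150, 1))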
I want to save the best checkpoint while my model is training, but the callback does not work as I expect. According to Saving best model in Keras, this code should work:
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=8, input_shape=(X_train.shape[1], 4)))
model.add(MaxPooling1D(pool_size=4))
model.add(Flatten())
model.add(Dense(16, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
stop = EarlyStopping(monitor='val_loss', patience=15, verbose=1, mode='min')
save = ModelCheckpoint('./my_model.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1, epsilon=1e-4, mode='min')
history = model.fit(X_train, y_train, epochs=25, verbose=0, callbacks=[stop, save, reduce_lr], validation_split=0.25)
However, it keeps giving me the following error:
AttributeError Traceback (most recent call last)
<ipython-input-28-f86f439eae5a> in <module>()
17 reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')
18
---> 19 history = model.fit(X_train, y_train, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
20
21
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, max_queue_size, workers, use_multiprocessing, **kwargs)
878 initial_epoch=initial_epoch,
879 steps_per_epoch=steps_per_epoch,
--> 880 validation_steps=validation_steps)
881
882 def evaluate(self,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, mode, validation_in_fit, **kwargs)
323 # Callbacks batch_begin.
324 batch_logs = {'batch': batch_index, 'size': len(batch_ids)}
--> 325 callbacks._call_batch_hook(mode, 'begin', batch_index, batch_logs)
326 progbar.on_batch_begin(batch_index, batch_logs)
327
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
194 t_before_callbacks = time.time()
195 for callback in self.callbacks:
--> 196 batch_hook = getattr(callback, hook_name)
197 batch_hook(batch, logs)
198 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
AttributeError: 'EarlyStopping' object has no attribute 'on_train_batch_begin'
I have successfully used this code for my functional model, but I am not sure what the problem is here with the sequential model.
From the stack trace, I notice that you're using tensorflow.keras but EarlyStopping from keras (based on the other answer you referenced). This is the cause of the error.
This should work (import from tensorflow.keras):
from tensorflow.keras.callbacks import EarlyStopping
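For consistency, the other callbacks from the question can be imported from the same package; a minimal sketch (note that tf.keras's ReduceLROnPlateau takes min_delta rather than the older epsilon argument):
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

# All three callbacks now come from tensorflow.keras, matching the model.
stop = EarlyStopping(monitor='val_loss', patience=15, verbose=1, mode='min')
save = ModelCheckpoint('./my_model.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=1, min_delta=1e-4, mode='min')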
If you want to use all of Keras's functionality, you can't use TensorFlow 2.0; the Keras integration is incomplete.
pip install --upgrade "tensorflow==1.4" "keras>=2.0"
I have a Dask DataFrame that I want to use for fitting a Keras autoencoder model:
DataFrame:
import dask.dataframe as dd
input_df = dd.read_csv(file_path)
input_df.dtypes
_2 float64
_3 float64
_4 float64
_5 float64 ...
Keras model:
autoencoder = Sequential()
autoencoder.add(Dense(dense[0], input_shape=(dense[0],), activation='relu'))
autoencoder.add(Dense(dense[1], activation='relu'))
autoencoder.add(Dense(dense[2], activation='relu'))
autoencoder.add(Dense(dense[3], activation='relu'))
autoencoder.add(Dense(dense[0], activation='relu'))
autoencoder.compile(loss='mse',
                    optimizer='adam',
                    metrics=['mse'])
When I pass the DataFrame for fitting:
autoencoder.fit(input_df, input_df,
                batch_size=batch_size,
                epochs=epochs,
                verbose=1,
                validation_split=val_split)
I get the error:
TypeError Traceback (most recent call last)
<ipython-input-23-d0480d8a460d> in <module>()
3 epochs=epochs,
4 verbose=1,
----> 5 validation_split = val_split)
~/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
~/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
799 for (ref, sw, cw, mode) in
800 zip(y, sample_weights, class_weights,
--> 801 feed_sample_weight_modes)
802 ]
803 # Check that all arrays have the same length.
~/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training.py in <listcomp>(.0)
797 sample_weights = [
798 standardize_weights(ref, sw, cw, mode)
--> 799 for (ref, sw, cw, mode) in
800 zip(y, sample_weights, class_weights,
801 feed_sample_weight_modes)
~/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_weights(y, sample_weight, class_weight, sample_weight_mode)
522 else:
523 if sample_weight_mode is None:
--> 524 return np.ones((y.shape[0],), dtype=K.floatx())
525 else:
526 return np.ones((y.shape[0], y.shape[1]), dtype=K.floatx())
~/anaconda3/envs/py36/lib/python3.6/site-packages/numpy/core/numeric.py in ones(shape, dtype, order)
201
202 """
--> 203 a = empty(shape, dtype, order)
204 multiarray.copyto(a, 1, casting='unsafe')
205 return a
TypeError: 'float' object cannot be interpreted as an integer
Would appreciate some help! Thanks!
I am using tensorflow==1.2.1 and Keras==2.0.6 to build a model:
input_num = X_norm_keras[:,2:].shape[1]
model_keras = Sequential()
model_keras.add(Dense(10, input_dim=input_num, activation='relu'))
model_keras.add(Dense(1, activation='linear'))
model_keras.compile(loss='mean_squared_error', kernel_regularizer=regularizers.l2(0.2), optimizer='adam')
model_keras.fit(X_norm_train[:,2:], y_norm_train, batch_size=25, epochs=250)
But I got the following errors:
Using TensorFlow backend.
total data points = (25, 106)
Epoch 1/250
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-4cbd897903e7> in <module>()
102 model_keras.compile(loss='mean_squared_error', kernel_regularizer=regularizers.l2(0.2), optimizer='adam')
--> 103 model_keras.fit(X_norm_train[:,2:], y_norm_train, batch_size=25, epochs=250)
/usr/local/lib/python3.4/dist-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
861 class_weight=class_weight,
862 sample_weight=sample_weight,
--> 863 initial_epoch=initial_epoch)
864
865 def evaluate(self, x, y, batch_size=32, verbose=1,
/usr/local/lib/python3.4/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
1428 val_f=val_f, val_ins=val_ins, shuffle=shuffle,
1429 callback_metrics=callback_metrics,
-> 1430 initial_epoch=initial_epoch)
1431
1432 def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):
/usr/local/lib/python3.4/dist-packages/keras/engine/training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
1077 batch_logs['size'] = len(batch_ids)
1078 callbacks.on_batch_begin(batch_index, batch_logs)
-> 1079 outs = f(ins_batch)
1080 if not isinstance(outs, list):
1081 outs = [outs]
/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
2266 updated = session.run(self.outputs + [self.updates_op],
2267 feed_dict=feed_dict,
-> 2268 **self.session_kwargs)
2269 return updated[:len(self.outputs)]
2270
TypeError: run() got an unexpected keyword argument 'kernel_regularizer'
Am I missing anything here? Thanks!
The regularizer kernel_regularizer=regularizers.l2(0.2) should be an argument of Dense(), not model.compile().
From the documentation of model.compile():
**kwargs: When using the Theano/CNTK backends, these arguments are passed into K.function. When using the TensorFlow backend, these arguments are passed into tf.Session.run.
That's why you are seeing an error coming from run().
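A sketch of the corrected model, with the regularizer moved onto the layer (assuming the imports below match the question's Keras 2.0.6 environment):
from keras import regularizers
from keras.models import Sequential
from keras.layers import Dense

model_keras = Sequential()
# The L2 penalty is attached to the layer's kernel, not to compile().
model_keras.add(Dense(10, input_dim=input_num, activation='relu',
                      kernel_regularizer=regularizers.l2(0.2)))
model_keras.add(Dense(1, activation='linear'))
model_keras.compile(loss='mean_squared_error', optimizer='adam')
model_keras.fit(X_norm_train[:,2:], y_norm_train, batch_size=25, epochs=250)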