Is there any way to use the fit_generator() method with the KerasRegressor wrapper?

I am trying to tune hyper-parameters for my LSTM model using GridSearchCV but have used TimeseriesGenerator from keras.preprocessing.sequence. How do I modify the KerasRegressor wrapper to accommodate fit_generator() instead of the fit() method?
def create_model(layer1=50):
    lstm_model = Sequential()
    lstm_model.add(LSTM(layer1, input_shape=(10, 11)))
    lstm_model.add(Dense(11, activation='tanh'))
    lstm_model.compile(loss='mean_squared_error', optimizer='rmsprop')
    return lstm_model

model = KerasRegressor(build_fn=create_model, epochs=5, batch_size=10, verbose=0)

layer1 = [40, 50]
param_grid = dict(layer1=layer1)

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)
But how do I use KerasRegressor with a generator like this?

from keras.preprocessing.sequence import TimeseriesGenerator
train_data_gen = TimeseriesGenerator(X, y, length=10, batch_size=100)
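As far as I know, the legacy keras.wrappers.scikit_learn wrapper has no built-in generator support, so one possible workaround is to subclass KerasRegressor and do the windowing inside fit() and score(). This is a rough, untested sketch that relies on the wrapper internals (build_fn, sk_params, filter_sk_params) and hard-codes the generator settings from above:

from keras.preprocessing.sequence import TimeseriesGenerator
from keras.wrappers.scikit_learn import KerasRegressor


class KerasGeneratorRegressor(KerasRegressor):
    """KerasRegressor variant that trains with fit_generator()."""

    def fit(self, X, y, **kwargs):
        # build the underlying Keras model the same way the parent class does
        self.model = self.build_fn(**self.filter_sk_params(self.build_fn))
        train_gen = TimeseriesGenerator(X, y, length=10, batch_size=100)
        self.history = self.model.fit_generator(
            train_gen,
            epochs=self.sk_params.get('epochs', 1),
            verbose=self.sk_params.get('verbose', 0))
        return self

    def score(self, X, y, **kwargs):
        # window the held-out fold the same way before evaluating;
        # the sign is flipped because scikit-learn expects "higher is better"
        eval_gen = TimeseriesGenerator(X, y, length=10, batch_size=100)
        return -self.model.evaluate_generator(eval_gen)


model = KerasGeneratorRegressor(build_fn=create_model, epochs=5, verbose=0)
grid = GridSearchCV(estimator=model, param_grid=dict(layer1=[40, 50]), cv=3)
grid_result = grid.fit(X, y)

Note that create_model compiles only a loss and no extra metrics, so evaluate_generator() returns a single scalar here; with extra metrics it would return a list whose first element is the loss.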

Related

GridsearchCV loss doesn't equal model.fit() loss values

I am confused as to which metric GridsearchCV is using in its parameter search. My understanding is that my model object feeds it a metric and this is what is used to determine the "best_params". But this doesn't appear to be the case. I thought that scoring=None is the default and, as a result, the first metric given in the metrics option of model.compile() is used. So in my case the scoring function used should be the mean_squared_error. My explanation for this issue is described next.
Here is what I am doing. I simulated some regression data using sklearn with 10 features on 10,000 observations (N = 10000 below). I am playing around with keras because I typically used pytorch in the past and never really dabbled with keras until now. I am noticing a discrepancy between the loss reported by my GridsearchCV call and the loss from the model.fit() call after I have my optimal set of parameters. Now I know I can just set refit=True and not re-fit the model again, but I am trying to get a feel for the output of the keras and sklearn GridsearchCV functions.
To be explicit about the discrepancy here is what I am seeing. I simulated some data using sklearn as follows:
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
I have created a "create_model" function that is looking to tune which activation function I am using (again this is a simple example for a proof of concept).
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
Performing the grid search I get the following output
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
Best: -21.163454 using {'activation_fn': 'linear'}
OK, so the best metric is the mean squared error of 21.16 (I understand the sign is flipped to turn it into a maximization problem). But when I fit the model using activation_fn='linear', the MSE I get is totally different.
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1)
.....
.....
Epoch 49/50
8000/8000 [==============================] - 0s 48us/step - loss: 344.1636 - mean_squared_error: 344.1636 - mean_absolute_error: 12.2109
Epoch 50/50
8000/8000 [==============================] - 0s 48us/step - loss: 326.4524 - mean_squared_error: 326.4524 - mean_absolute_error: 11.9250
history.history['mean_squared_error']
Out[723]:
[10053.778002929688,
9826.66806640625,
......
......
344.16363830566405,
326.45237121582034]
The difference is 326.45 vs. 21.16. Any insight as to what I am misunderstanding would be greatly appreciated. I would be more comfortable if they were within a reasonable neighborhood of each other, given that one is the error from one fold and the other is from the entire training data set. But 21 is nowhere near 326. Thanks!
The entire code is shown here:
import pandas as pd
import numpy as np
from keras import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
from keras.constraints import maxnorm
from sklearn import preprocessing
from sklearn.preprocessing import scale
from sklearn.datasets import make_regression
from matplotlib import pyplot as plt
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# create model
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
# define the grid search parameters
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1)
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1)
history.history.keys()
plt.plot(history.history['mean_absolute_error'])
# summarize results
grid_result.cv_results_
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
The large loss reported in your output (326.45237121582034) is the training loss. If you want a metric that can be compared with grid_result.best_score_ (from GridSearchCV) and with the MSE from best_model.fit(), you have to request the validation loss (cf. code below).
Now to the question: why is the validation loss lower than the training loss? In your case it is essentially because of dropout (which is applied during training but not during validation/test) - that is why the difference between training and validation losses disappears when you remove dropout. You can find a detailed explanation here of the possible reasons for a lower validation loss.
In short, the performance (MSE) of your model is given by the grid_result.best_score_ (21.163454 in your example).
import numpy as np
from keras import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.datasets import make_regression
import tensorflow as tf
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
tf.random.set_seed(42)
# Setting some data basics
N = 10000
feats = 10
# generate regression dataset
X, y = make_regression(n_samples=N, n_features=feats, n_informative=2, noise=3)
# training data and testing data #
X_train = X[:int(N * 0.8)]
y_train = y[:int(N * 0.8)]
X_test = X[int(N * 0.8):]
y_test = y[int(N * 0.8):]
def create_model(activation_fn):
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=feats, activation=activation_fn,
                    kernel_initializer='normal'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation=activation_fn))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='linear'))
    # Compile model
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['mean_squared_error', 'mae'])
    return model
# create model
model = KerasRegressor(build_fn=create_model, epochs=50, batch_size=200, verbose=0)
# define the grid search parameters
activations = ['linear','relu']
param_grid = dict(activation_fn = activations)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train, verbose=1, validation_data=(X_test, y_test))
best_model = create_model('linear')
history = best_model.fit(X_train, y_train, epochs=50, batch_size=200, verbose=1, validation_data=(X_test, y_test))
history.history.keys()
# plt.plot(history.history['mae'])
# summarize results
print(grid_result.cv_results_)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))

How to use KerasClassifier validation_split together with scikit-learn GridSearchCV

I want to test some hyperparameters, so I want to use GridSearchCV, because that seems to be the way to do it.
But I also want to use a validation split, so that I can use callbacks like EarlyStopping and/or ReduceLROnPlateau. So my question is:
How do I implement GridSearchCV + validation_split correctly, so that none of the data in the validation split is used for training and the whole training set is used to train my model?
As far as I know, GridSearchCV splits my remaining training data (which is 1 - validation_split) yet again? I get a fairly high accuracy and I am thinking that I don't split the data correctly.
model = KerasClassifier(build_fn=create_model,verbose=2, validation_split=0.1)
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform',
        #'normal',
        'uniform',
        'he_normal',
        #'lecun_normal',
        #'he_uniform'
        ]
epochs = [3]  # 5, 8, 10, 30
batches = [64]  # 32, 64
param_grid = dict(optimizer=optimizers, epochs=epochs, batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X_train, Y_train)
You can use your self-defined validation data by passing an extra argument to the grid.fit() function, namely validation_data=(X_test, Y_test). The documentation states that grid.fit() accepts all valid arguments that can be passed to the actual model.fit() function of the underlying Keras model. Therefore, you can pass the validation data through grid.fit(). You may also pass callback functions there.
I am adding working code below (applied to the MNIST digit dataset). Notice how I added the validation data in grid.fit() and removed validation_split:
import tensorflow as tf
import numpy as np
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
Y_train = to_categorical(Y_train, 10)
Y_test = to_categorical(Y_test, 10)
X_train = np.expand_dims(X_train, 3)
X_test = np.expand_dims(X_test, 3)
def create_model(optimizer, init):
    model = tf.keras.Sequential([
        tf.keras.layers.Convolution2D(32, 3, input_shape=(28, 28, 1),
                                      activation='relu', kernel_initializer=init),
        tf.keras.layers.Convolution2D(32, 3, activation='relu',
                                      kernel_initializer=init),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(12, activation='relu',
                              kernel_initializer=init),
        tf.keras.layers.Dense(10, activation='softmax',
                              kernel_initializer=init),
    ])
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer, metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model, verbose=2,)
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform',
        #'normal',
        'uniform',
        'he_normal',
        #'lecun_normal',
        #'he_uniform'
        ]
epochs = [4,]
batches = [32, 64]
# 'epochs' rather than the deprecated 'nb_epoch', which newer wrappers silently drop
param_grid = dict(optimizer=optimizers, epochs=epochs,
                  batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X_train, Y_train, validation_data=(X_test, Y_test))
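Since grid.fit() forwards its keyword arguments to the underlying model.fit(), callbacks can, as far as I can tell, be passed the same way. A small sketch (the EarlyStopping settings are arbitrary):

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=2,
                           restore_best_weights=True)
grid_result = grid.fit(X_train, Y_train,
                       validation_data=(X_test, Y_test),
                       callbacks=[early_stop])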
Hope this helps. Thanks.

Keras NN model isn't compatible with sklearn's VotingClassifier

I'm trying to ensemble my models and use a VotingClassifier from sklearn to get an accuracy score. Right now, my Keras model (NN) fails when the ensemble is fitted. Here's my code:
I've tried using sklearn's NN and KerasClassifier. Basically, I've run out of options.
def multiLayerPerceptionModel(nb_epochs, hidden_1, hidden_2,
                              learn_rate, batch_size, num_input, num_classes,
                              path_log, X_Train, X_Test, Y_Train, Y_Test):
    model_RN = keras.Sequential([
        keras.layers.Dense(hidden_1, activation=tf.nn.relu),
        keras.layers.Dense(hidden_2, activation=tf.nn.relu),
        keras.layers.Dense(num_classes, activation='softmax'),
        BatchNormalization(),
    ])
    model_RN.add(layers.Dense(num_classes, activation='softmax'))
    model_RN.compile(optimizer=SGD(learn_rate),
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])
    model_RN.fit(X_Train, Y_Train, validation_data=(X_Test, Y_Test),
                 epochs=nb_epochs, batch_size=batch_size,
                 callbacks=[tensorboard])
    model_RN.add(Flatten())
TypeError: Cannot clone object '<tensorflow.python.keras.engine.sequential.Sequential object at 0x11778d400>' (type <class 'tensorflow.python.keras.engine.sequential.Sequential'>): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.
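No fix is shown in this question, but the error message itself points at the usual approach: VotingClassifier only accepts scikit-learn estimators (objects implementing get_params/set_params), so a raw Sequential model cannot be used directly. A hedged sketch of the standard workaround, wrapping the network in KerasClassifier (the builder name build_rn is hypothetical, and it reuses the free variables hidden_1, hidden_2, num_classes, learn_rate, nb_epochs, batch_size from the snippet above):

from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression


def build_rn():
    # no-argument builder returning a *compiled* model, mirroring the
    # architecture of multiLayerPerceptionModel above
    model_RN = keras.Sequential([
        keras.layers.Dense(hidden_1, activation=tf.nn.relu),
        keras.layers.Dense(hidden_2, activation=tf.nn.relu),
        keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model_RN.compile(optimizer=SGD(learn_rate),
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])
    return model_RN


keras_clf = KerasClassifier(build_fn=build_rn, epochs=nb_epochs,
                            batch_size=batch_size, verbose=0)
ensemble = VotingClassifier(
    estimators=[('nn', keras_clf), ('lr', LogisticRegression())],
    voting='soft')
ensemble.fit(X_Train, Y_Train)
print(ensemble.score(X_Test, Y_Test))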

How to use tensorboard with tensorflow 2.0?

The following seems to be the new way of creating a FileWriter but I am not sure how to add_graph or do other things to get the model graph showing up in tensorboard.
train_writer = tf.summary.create_file_writer('logs/1/train')
You can do it by using the tf.keras.callbacks.TensorBoard('/path/to/log/dir') callback in the following way.
1. Create a build-model function:
def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu',
                     input_shape=[len(train_dataset[col_to_norm].keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model
2. Assign it to a variable:
model = build_model()
3. Fit the model using the fit() method and pass the TensorBoard callback:
model.fit(
    normalized_train_data, train_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[tf.keras.callbacks.TensorBoard('logs/1/train')])
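After training, the logged events can be viewed by pointing TensorBoard at the same directory from the command line, e.g. tensorboard --logdir logs/1/train, and opening the URL it prints (typically http://localhost:6006).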

How can I get the score I want by using KerasRegressor and an sklearn pipeline?

I want to insert a Keras model into a scikit-learn pipeline, but when I use pipeline.score, I am confused. Here is the code:
from keras import models
from keras import layers
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
def build_model():
    model = models.Sequential()
    model.add(
        layers.Dense(
            64, activation='relu', input_shape=(train_data.shape[1], )))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model

model = KerasRegressor(
    build_fn=build_model, epochs=90, batch_size=1, verbose=0)
pipe_network = Pipeline([('scl', StandardScaler()), ('clf', model)])
pipe_network.fit(train_data, train_targets)
The model score is:
pipe_network.score(test_data, test_targets)
>>> -12.813292971994802
What is this score? I want to get a result like the output of the evaluate function. How can I do that?
stdsc = StandardScaler()
train_data_std = stdsc.fit_transform(train_data)
test_data_std = stdsc.transform(test_data)
network = build_model()
network.fit(train_data_std, train_targets, epochs=90, batch_size=1, verbose=0)
network.evaluate(test_data_std, test_targets)
>>> [12.681396334779029, 2.479423579047708]
Thank you for your attention.
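As far as I can tell from the legacy keras.wrappers.scikit_learn wrapper, KerasRegressor.score() returns the negative of the model's loss (sign-flipped so that "higher is better" for scikit-learn), which is why pipe_network.score(...) is roughly the negative of the first value returned by evaluate(). To reproduce evaluate()-style output through the fitted pipeline, one option (a sketch relying on the wrapper exposing the fitted Keras model as its .model attribute) is to pull the fitted scaler and the underlying Keras model out of the pipeline:

# assumes pipe_network has already been fit as above
scaler = pipe_network.named_steps['scl']      # fitted StandardScaler
keras_reg = pipe_network.named_steps['clf']   # fitted KerasRegressor

test_data_std = scaler.transform(test_data)
print(keras_reg.model.evaluate(test_data_std, test_targets))
# -> [MSE loss, MAE], comparable to network.evaluate(...) above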
