Keras NN model isn't compatible with sklearn's VotingClassifier - python-3.x

I'm trying to ensemble my models and use a VotingClassifier from sklearn to get an accuracy score. Right now, my Keras model (NN) won't fit inside the ensemble. I've tried sklearn's NN and KerasClassifier; basically, I've run out of options. Here's my code:
def multiLayerPerceptionModel(nb_epochs, hidden_1, hidden_2, learn_rate,
                              batch_size, num_input, num_classes, path_log,
                              X_Train, X_Test, Y_Train, Y_Test):
    model_RN = keras.Sequential([
        keras.layers.Dense(hidden_1, activation=tf.nn.relu),
        keras.layers.Dense(hidden_2, activation=tf.nn.relu),
        keras.layers.Dense(num_classes, activation='softmax'),
        BatchNormalization(),
    ])
    model_RN.add(layers.Dense(num_classes, activation='softmax'))
    model_RN.compile(optimizer=SGD(learn_rate),
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])
    model_RN.fit(X_Train, Y_Train, validation_data=(X_Test, Y_Test),
                 epochs=nb_epochs, batch_size=batch_size,
                 callbacks=[tensorboard])
    model_RN.add(Flatten())
TypeError: Cannot clone object '<tensorflow.python.keras.engine.sequential.Sequential object at 0x11778d400>' (type <class 'tensorflow.python.keras.engine.sequential.Sequential'>): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.

Related

Is there any way to use the fit_generator() method with the KerasRegressor wrapper?

I am trying to tune hyper-parameters for my LSTM model using GridSearchCV, but my training data comes from a TimeseriesGenerator (keras.preprocessing.sequence). How do I modify the KerasRegressor wrapper to accommodate fit_generator() instead of the fit() method?
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV

def create_model(layer1=50):
    lstm_model = Sequential()
    lstm_model.add(LSTM(layer1, input_shape=(10, 11)))
    lstm_model.add(Dense(11, activation='tanh'))
    lstm_model.compile(loss='mean_squared_error', optimizer='rmsprop')
    return lstm_model

model = KerasRegressor(build_fn=create_model, epochs=5, batch_size=10, verbose=0)

layer1 = [40, 50]
param_grid = dict(layer1=layer1)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)
But how do I use KerasRegressor with a generator like
from keras.preprocessing.sequence import TimeseriesGenerator
train_data_gen = TimeseriesGenerator(X, y, length=10, batch_size=100)
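The stock KerasRegressor only ever calls model.fit() on plain arrays, and overriding fit() to call fit_generator() also forces you to rework predict()/score() so GridSearchCV's scoring still works. A simpler route, and it is a workaround that sidesteps fit_generator() rather than a modification of the wrapper, is to materialise the generator's windows once and keep the stock wrapper. A rough sketch, assuming X has shape (n_samples, 11) and y has 11 columns so the windows match input_shape=(10, 11) and the Dense(11) output:

from keras.preprocessing.sequence import TimeseriesGenerator

# build every (10-step window, target) pair once, in a single batch
gen = TimeseriesGenerator(X, y, length=10, batch_size=len(X))
X_windows, y_windows = gen[0]   # (n_samples - 10, 10, 11) windows and their targets

# the plain KerasRegressor / GridSearchCV combination from above then works unchanged
model = KerasRegressor(build_fn=create_model, epochs=5, batch_size=10, verbose=0)
grid = GridSearchCV(estimator=model, param_grid=dict(layer1=[40, 50]), n_jobs=-1, cv=3)
grid_result = grid.fit(X_windows, y_windows)

If fit_generator() is genuinely required, for example because the windowed data does not fit in memory, the usual approach is to subclass KerasRegressor and override fit, predict and score together.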

Get confusion matrix from a Keras model

I have the following NN model using Keras:
import numpy as np
from keras import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
path = 'pima-indians-diabetes.data.csv'
dataset = np.loadtxt(path, delimiter=",")
X = dataset[:,0:8]
Y = dataset[:,8]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
model = Sequential()
model.add(Dense(16, input_dim=8, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=16, validation_data=(X_test, y_test))
Is it possible to extract the confusion matrix from this model, and if so, how?
You can use scikit-learn:
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
cm = confusion_matrix(y_test, np.rint(y_pred))
It can be done using TensorFlow (which is almost Keras =)).
You start by making predictions on your test set with your trained model, rounding the sigmoid outputs to 0/1 labels (tf.math.confusion_matrix expects class labels, not probabilities):
predictions = np.rint(model.predict(X_test))
Then you can import TensorFlow and use its confusion_matrix method as follows.
import tensorflow as tf
conf_matrix = tf.math.confusion_matrix(labels=y_test, predictions=predictions)
More information in the TensorFlow documentation.

How to include normalization of features in Keras regression model?

I have data for a regression task.
The independent features (X_train) are scaled with a StandardScaler.
I built a Keras sequential model with hidden layers, compiled it, fitted it with model.fit(X_train_scaled, y_train), and then saved the model to a .hdf5 file.
Now, how do I include the scaling step inside the saved model, so that the same scaling parameters can be applied to unseen test data?
# imported all the libraries for training and evaluating the model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

sc = StandardScaler()
X_train_scaled = sc.fit_transform(X_train)
X_test_scaled = sc.transform(X_test)

def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation=tf.nn.relu, input_shape=[X_train_scaled.shape[1]]),
        layers.Dense(64, activation=tf.nn.relu),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model

model = build_model()
EPOCHS = 1000
history = model.fit(X_train_scaled, y_train, epochs=EPOCHS,
                    validation_split=0.2, verbose=0)
loss, mae, mse = model.evaluate(X_test_scaled, y_test, verbose=0)
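One way to keep the scaling inside the saved file itself is a Keras preprocessing layer that learns the per-feature mean and variance from the training data and stores them alongside the model's weights. The sketch below is not from the original post; it assumes a TF 2.x build where the experimental Normalization layer is available, and the output filename is illustrative.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# this layer plays the role of the StandardScaler and travels with the model
normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
normalizer.adapt(np.asarray(X_train, dtype='float32'))   # learns per-feature mean/variance

model = tf.keras.Sequential([
    normalizer,
    layers.Dense(64, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(1),
])
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.RMSprop(0.001),
              metrics=['mean_absolute_error', 'mean_squared_error'])

# train and evaluate on the raw, unscaled arrays; scaling happens inside the model
model.fit(np.asarray(X_train, dtype='float32'), y_train,
          epochs=EPOCHS, validation_split=0.2, verbose=0)
model.save('regression_with_builtin_scaling.h5')   # hypothetical filename

On an older Keras, the simpler fallback is to persist the fitted scaler next to the .hdf5 file, for example with joblib.dump(sc, 'scaler.pkl'), and call sc.transform on any unseen data before model.predict.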

MLP classifier for multi-class classification

I am a newbie with Keras.
I'm trying to follow the Keras tutorial on a Multilayer Perceptron (MLP) for multi-class softmax classification, using my own data set.
My data has 3 classes and only one feature, but I don't understand why the accuracy always stays around 0.3 and the model predicts all training data as the first class. The confusion matrix looks like this:
Confusion matrix
Here is the code:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
import pandas as pd
import numpy as np

# Importing the dataset
dataset = pd.read_csv('StatusAll.csv')
X = dataset.iloc[:, 1:].values
y = dataset.iloc[:, 0:1].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

from keras.utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# In the first layer, you must specify the expected input data shape:
# here, 1-dimensional vectors.
model.add(Dense(64, activation='tanh', input_dim=1))
model.add(Dropout(0.5))
model.add(Dense(64, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    epochs=100,
                    batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
print('Test score:', score[0])
print('Test accuracy:', score[1])

from sklearn import metrics
prediction = model.predict(x_test)
prediction = np.around(prediction)
y_test_non_category = [np.argmax(t) for t in y_test]
y_predict_non_category = [np.argmax(t) for t in prediction]

from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test_non_category, y_predict_non_category)
print(conf_mat)
I hope I can get some advice, thanks.
Samples of x_train and of y_train (before the conversion to categorical) are shown in the attached screenshots.
Your final Dense layer has 4 outputs; it looks like you are classifying 4 classes instead of 3.
model.add(Dense(3, activation='softmax')) # Number of classes 3
It would be helpful to see sample data from x_train and y_train to make sure the pre-processing is correct. Because you have only 1 feature, an MLP might be overkill; a decision tree would be simpler unless you want to experiment with MLPs.
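One detail worth checking alongside that fix: if the raw labels are 1, 2, 3 rather than 0, 1, 2, to_categorical pads an extra, always-zero column, which may be why the output layer ended up with 4 units. A small sketch, assuming the labels are three arbitrary values, that re-encodes them before the one-hot step:

from sklearn.preprocessing import LabelEncoder
from keras.utils import to_categorical

# applied to the raw labels straight out of train_test_split:
# LabelEncoder maps the three raw class values onto 0, 1, 2, so
# to_categorical produces exactly 3 columns, matching Dense(3, activation='softmax')
encoder = LabelEncoder()
y_train = to_categorical(encoder.fit_transform(y_train.ravel()))
y_test = to_categorical(encoder.transform(y_test.ravel()))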

From SKLearn to Keras - What is the difference?

I'm trying to go from SKLearn to Keras in order to make specific improvements to my models.
However, I can't get the same performance I had with my SKLearn model:
mlp = MLPClassifier(
    solver='adam', activation='relu',
    beta_1=0.9, beta_2=0.999, learning_rate='constant',
    alpha=0, hidden_layer_sizes=(238,),
    max_iter=300
)
dev_score(mlp)
This gives a score of ~0.65 every time.
Here is my corresponding Keras code:
def build_model(alpha):
    level_moreargs = {'kernel_regularizer': l2(alpha), 'kernel_initializer': 'glorot_uniform'}
    model = Sequential()
    model.add(Dense(units=238, input_dim=X.shape[1], **level_moreargs))
    model.add(Activation('relu'))
    model.add(Dense(units=class_names.shape[0], **level_moreargs))  # output
    model.add(Activation('softmax'))
    model.compile(loss=keras.losses.categorical_crossentropy,  # like sklearn
                  optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0),
                  metrics=['accuracy'])
    return model

k_dnn = KerasClassifier(build_fn=build_model, epochs=300, batch_size=200, validation_data=None,
                        shuffle=True, alpha=0.5, verbose=0)
dev_score(k_dnn)
From looking at the documentation (and digging into the SKLearn code), this should correspond to exactly the same thing.
However, I get ~0.5 accuracy when I run this model, which is very bad.
And if I set alpha to 0, SKLearn's score barely changes (0.63), while the Keras score fluctuates randomly between 0.2 and 0.4.
What is the difference between these models? Why is Keras, although supposed to be better than SKLearn, outperformed by so much here? What's my mistake?
Thanks,
