I am trying to build a basic neural network in Keras with multiple input layers, some of which are connected to Embedding layers. I am using the data from Kaggle's Housing Prices competition.
Here is the structure of the model I am using:
I use a scikit-learn Pipeline to transform the data; for the embedding layers, the categorical features are label-encoded. A sketch of such a preprocessor is shown below, followed by the rest of the code I am using to build the model.
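Since the preprocessor and numeric_columns are not shown in the post, here is a minimal hypothetical sketch of what they might look like, assuming a ColumnTransformer that scales the numeric columns and label-encodes the two categorical ones (the names and column choices are assumptions, not the original code):

# Hypothetical sketch only: the original preprocessor and numeric_columns
# are defined elsewhere in the notebook.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder, StandardScaler

categorical_columns = ['MSSubClass', 'MSZoning']  # assumed
numeric_columns = [c for c in X_train.columns if c not in categorical_columns]

preprocessor = ColumnTransformer(transformers=[
    ('num', StandardScaler(), numeric_columns),
    ('cat', OrdinalEncoder(), categorical_columns),  # label encoding for the embeddings
])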
##################### IMPORTS (added for completeness) #####################
import numpy as np
from sklearn.metrics import mean_squared_error
from tensorflow.keras.layers import Input, Embedding, Flatten, Concatenate, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model

##################### CREATE INPUT AND EMBEDDING LAYERS FOR CATEGORICAL FEATURES #####################
# Input and embedding layer for the MSSubClass feature
no_of_unique_categories = X_train['MSSubClass'].nunique() + 1
embedding_size = int(max(min(np.ceil(no_of_unique_categories / 2), 50), 2))
input_cat_layer_1 = Input(shape=(1,))
embedding_layer_1 = Embedding(
    input_dim=no_of_unique_categories,
    output_dim=embedding_size,
    input_length=1)(input_cat_layer_1)
flattened_layer_1 = Flatten()(embedding_layer_1)

# Input and embedding layer for the MSZoning feature
no_of_unique_categories = X_train['MSZoning'].nunique() + 1
embedding_size = int(max(min(np.ceil(no_of_unique_categories / 2), 50), 2))
input_cat_layer_2 = Input(shape=(1,))
embedding_layer_2 = Embedding(
    input_dim=no_of_unique_categories,
    output_dim=embedding_size,
    input_length=1)(input_cat_layer_2)
flattened_layer_2 = Flatten()(embedding_layer_2)

# Input layer for all the numerical features
no_of_numeric_features = len(numeric_columns)
numeric_input = Input(shape=(no_of_numeric_features,))

##################### BUILD THE MODEL #####################
concatenate = Concatenate()([numeric_input, flattened_layer_1, flattened_layer_2])
hidden = Dense(110, activation='relu')(concatenate)
output = Dense(1)(hidden)
model = Model(inputs=[numeric_input, input_cat_layer_1, input_cat_layer_2], outputs=output, name='final_model')
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True, to_file='model_structure.png')
optimizer = Adam(learning_rate=0.001)
model.compile(loss='mse', optimizer=optimizer, metrics=['RootMeanSquaredError'])

##################### TRANSFORM DATA TO USE IN MODEL #####################
X_train_transformed = preprocessor.fit_transform(X_train)
X_test_transformed = preprocessor.transform(X_test)
model.fit(X_train_transformed, y_train, epochs=200, validation_data=(X_test_transformed, y_test))
mse_test = model.evaluate(X_test_transformed, y_test)
predictions = model.predict(X_test_transformed)
mse = mean_squared_error(y_test, predictions)
rmse = np.sqrt(mse)
The problem I am running into is that when I try to fit the model to the training data, I receive the following error:
ValueError: Layer "final_model" expects 3 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 37) dtype=float32>]
I cannot figure out why this is happening and what I can do to fix it.
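The error message describes the cause: the model declares three Input layers, but fit receives a single (None, 37) array, which Keras cannot match to three inputs. A multi-input model must be given a list (or dict) of arrays, one per Input. Below is a minimal sketch of the split, assuming the preprocessor emits the numeric columns first and the two label-encoded categorical columns last (adjust the slicing to your ColumnTransformer's actual output order):

# Sketch only: the column order below is an assumption about the preprocessor output.
n = no_of_numeric_features
train_inputs = [
    X_train_transformed[:, :n],        # numeric_input
    X_train_transformed[:, [n]],       # input_cat_layer_1 (MSSubClass)
    X_train_transformed[:, [n + 1]],   # input_cat_layer_2 (MSZoning)
]
test_inputs = [
    X_test_transformed[:, :n],
    X_test_transformed[:, [n]],
    X_test_transformed[:, [n + 1]],
]
model.fit(train_inputs, y_train, epochs=200, validation_data=(test_inputs, y_test))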
My concern stems from the fact that I currently have a preprocessing layer required to adapt the input for the transfer-learning base model, but I want to add several other preprocessing layers for data augmentation.
I want the base-model preprocessing to apply to test data during predict() calls, but I don't want the layers added for data augmentation to do so. According to the Keras documentation, "Data augmentation is inactive at test time so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict)." Is the same true for a base-model preprocessing layer?
Model Construction:
# Imports assumed for this snippet
from tensorflow import keras
from tensorflow.keras import applications, layers

# layer construction
input_layer = layers.Input(shape=(targetWidth, targetHeight, 3))
preprocc_input = applications.inception_v3.preprocess_input
base_model = applications.InceptionV3(
    input_shape=(targetWidth, targetHeight, 3),
    include_top=False,
)
base_model.trainable = False
global_avg_pool_layer = layers.GlobalAveragePooling2D()
fc_layer = layers.Dense(1024, activation='relu')
#dropout_layer = layers.Dropout(0.2)
output_layer = layers.Dense(
    units=1,
    activation=keras.activations.sigmoid  #'softmax'
)

# layer connecting
x = preprocc_input(input_layer)
x = base_model(x, training=False)
x = global_avg_pool_layer(x)
x = fc_layer(x)
#x = dropout_layer(x)
predictions = output_layer(x)
model = keras.Model(input_layer, predictions)
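To answer the question directly: preprocess_input is an ordinary function baked into the graph, not a preprocessing layer, so it runs during fit, evaluate, and predict alike. Only layers that special-case the training flag (the Random* augmentation layers, Dropout, BatchNormalization, ...) behave differently at inference, which is what the quoted documentation refers to. A hedged sketch of adding augmentation to the model above so it is active only during fit:

# Sketch: the Random* layers are inactive at inference (evaluate/predict),
# while preprocess_input always runs, in training and at test time alike.
data_augmentation = keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.1),
])

x = data_augmentation(input_layer)  # applied only when training=True
x = preprocc_input(x)               # always applied
x = base_model(x, training=False)
# ... then continue with the pooling/dense layers as above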
I'm attempting to find variable importance on a neural network I've built. Using TensorFlow, it seems you can use either the tensorflow.keras way or the KerasRegressor way. Admittedly, I have been reading documentation and Stack Overflow for hours and am confused about the differences. They seem to perform similarly but have slightly different pros and cons.
One issue I'm running into: when I use tf.keras to build the model, I am able to clearly compare my training data to my validation/testing data and get an 'accuracy score', but when using KerasRegressor I am not.
The difference here is the .evaluate() function, which KerasRegressor doesn't seem to have.
Questions:
How can I evaluate the performance of a KerasRegressor model with the same output as tf.keras's .evaluate()?
KerasRegressor code:
# Imports assumed for this snippet
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor

K.clear_session()

def base_model():
    # 1- Instantiate the model
    modelNEW = keras.Sequential()
    # 2- Specify the shape of the first layer
    modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
    # 3- Add the output layer (softmax returns an array of probability scores;
    #    here we have to predict one of CSCANCEL, MEMBERCANCEL, ACTIVE)
    modelNEW.add(layers.Dense(3, activation='softmax'))
    modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    return modelNEW

# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]

yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)

currentModel = KerasRegressor(build_fn=base_model, epochs=100, batch_size=50, shuffle=True)
history = currentModel.fit(xTrain, yTrain)
Now if I want to test the accuracy, I have to use .predict()
prediction = currentModel.predict(xValidation)
# print(prediction)
# train_error = np.abs(yValidation - prediction)
# mean_error = np.mean(train_error)
# min_error = np.min(train_error)
# max_error = np.max(train_error)
# std_error = np.std(train_error)
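One possible way to get the evaluate-style numbers, assuming the legacy tensorflow.keras.wrappers.scikit_learn.KerasRegressor is in use: after fit, the wrapper keeps the underlying Keras model in its .model attribute, so .evaluate() can be called on that directly:

# Hedged sketch: .model is how the legacy scikit_learn wrapper stores the
# fitted Keras model; with SciKeras the attribute is .model_ instead.
test_loss, test_acc = currentModel.model.evaluate(xValidation, yValidation)
print('test_acc:', test_acc)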
tf.keras neural network:
modelNEW = keras.Sequential()
modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
# softmax returns an array of probability scores; here we have to predict
# one of CSCANCEL, MEMBERCANCEL, ACTIVE
modelNEW.add(layers.Dense(3, activation='softmax'))
modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]

yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)

history = modelNEW.fit(xTrain, yTrain, epochs=100, batch_size=50, shuffle=True)
This is the evaluation I need to see, and cannot get with KerasRegressor:
# 6- Model evaluation with test data
test_loss, test_acc = modelNEW.evaluate(xValidation, yValidation)
print('test_acc:', test_acc)
Possible workaround (still raises an error):
# predictionTrain = currentModel.predict(xTrain)
predictionValidation = currentModel.predict(xValidation)
# print('Train Accuracy = ',accuracy_score(yTrain,np.argmax(pred_train, axis=1)))
print('Test Accuracy = ',accuracy_score(yValidation,np.argmax(predictionValidation, axis=1)))
This raises: ValueError: Classification metrics can't handle a mix of multilabel-indicator and binary targets
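The error comes from yValidation still being one-hot encoded (via to_categorical) while np.argmax(predictionValidation, axis=1) yields plain class indices, so accuracy_score sees two different target types. Converting both sides to class indices makes them comparable; a sketch:

# Sketch of the fix: decode the one-hot labels back to class indices
# before comparing them with the argmax'd predictions.
import numpy as np
from sklearn.metrics import accuracy_score

print('Test Accuracy =', accuracy_score(np.argmax(yValidation, axis=1),
                                        np.argmax(predictionValidation, axis=1)))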
How to use Keras Applications for a multi-output model?
I am trying to use VGG16 for a multi-output model but get a ValueError: Input 0 of layer block1_conv1 is incompatible with the layer: expected axis -1 of input shape to have value 3 but received input with shape (None, 64, 64, 1). I am a bit lost and can't seem to find anything on multi-output use of VGG16 or any other Keras pre-trained application.
I am using this as a guide: Transfer Learning in Keras with Computer Vision
# Imports assumed for this snippet
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Model

# load model without classifier layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3))

# add new classifier layers
sex_x = base_model.output
sex_x = GlobalAveragePooling2D()(sex_x)
sex_x = Dropout(0.5)(sex_x)  # was Dropout(0.5)(gender_x); gender_x is undefined after the rename to sex_x
output_sex = Dense(2, activation='sigmoid')(sex_x)

weight_x = base_model.output
weight_x = GlobalAveragePooling2D()(weight_x)
weight_x = Dropout(0.5)(weight_x)
output_weight = Dense(1, activation='linear')(weight_x)

# define new model
model = Model(inputs=base_model.inputs, outputs=[output_sex, output_weight])
model.compile(optimizer='adam', loss=['binary_crossentropy', 'mae'], metrics=['accuracy'])
history = model.fit(x_train, [y_train[:, 0], y_train[:, 1]],
                    validation_data=(x_test, [y_test[:, 0], y_test[:, 1]]),
                    epochs=10, batch_size=128, shuffle=True)
Any ideas as to what I am doing wrong?
According to this error, your model expects 3-channel RGB input, but your data appears to be single-channel (grayscale). Make sure your data contains RGB images.
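If the data cannot easily be regenerated as RGB, one common workaround (a sketch, assuming the arrays have shape (N, 64, 64, 1)) is to repeat the single channel three times so the shape matches VGG16's expected (H, W, 3) input:

# Sketch: tile the grayscale channel to 3 channels before feeding VGG16.
import numpy as np

x_train_rgb = np.repeat(x_train, 3, axis=-1)  # (N, 64, 64, 1) -> (N, 64, 64, 3)
x_test_rgb = np.repeat(x_test, 3, axis=-1)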
I have training data in the form of NumPy arrays that I will use in a ConvLSTM.
The dimensions of the array are as follows:
trainX = (7000, 200, 8), where 7000 is the number of samples, 200 is the number of time steps per sample, and 8 is the number of features per time step (samples, timesteps, features).
Out of these 8 features, 3 remain the same throughout all time steps in a sample (in other words, these features are directly related to the sample), for example day of the week, month number, weekday (these change from sample to sample). To reduce the complexity, I want to keep these three features separate from the initial training set and merge them with the output of the ConvLSTM layer before applying the dense layer for classification (softmax activation).
So the initial training set dimension would be (7000, 200, 5) and the auxiliary input dimensions to be merged would be (7000, 3), because these 3 features are directly related to the sample. How can I implement this using Keras?
Following is the code I wrote using the functional API, but I don't know how to merge these two inputs.
# Imports assumed for this snippet
import numpy as np
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Input, ConvLSTM2D, BatchNormalization, Dense, Dropout
from tensorflow.keras.models import Model

#trainX.shape=(7000,200,5)
#trainy.shape=(7000,4)
#testX.shape=(3000,200,5)
#testy.shape=(3000,4)
#trainMetadata.shape=(7000,3)
#testMetadata.shape=(3000,3)
verbose, epochs, batch_size = 1, 50, 256
samples, n_features, n_outputs = trainX.shape[0], trainX.shape[2], trainy.shape[1]
n_steps, n_length = 4, 50
input_shape = (n_steps, 1, n_length, n_features)

model_input = Input(shape=input_shape)
clstm1 = ConvLSTM2D(filters=64, kernel_size=(1, 3), activation='relu', return_sequences=True)(model_input)
clstm1 = BatchNormalization()(clstm1)
clstm2 = ConvLSTM2D(filters=128, kernel_size=(1, 3), activation='relu', return_sequences=False)(clstm1)
conv_output = BatchNormalization()(clstm2)

metadata_input = Input(shape=trainMetadata.shape)
merge_layer = np.concatenate([metadata_input, conv_output])  # <-- this line raises the ValueError below
dense = Dense(100, activation='relu', kernel_regularizer=regularizers.l2(l=0.01))(merge_layer)
dense = Dropout(0.5)(dense)
output = Dense(n_outputs, activation='softmax')(dense)
model = Model(inputs=merge_layer, outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit([trainX, trainMetadata], trainy, validation_data=([testX, testMetadata], testy), epochs=epochs, batch_size=batch_size, verbose=verbose)
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
y = model.predict(testX)
but I am getting a ValueError at the merge_layer statement. This is the error:
ValueError: zero-dimensional arrays cannot be concatenated
What you describe cannot be done using the Sequential mode of Keras.
You need to use the Model class API: Guide to the Keras Model.
With this API you can build the complex model you are looking for.
Here you have an example of how to use it: How to Use the Keras Functional API for Deep Learning
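Concretely, the immediate cause of the error above is that np.concatenate operates on concrete NumPy arrays, while metadata_input and conv_output are Keras symbolic tensors; merging graph tensors requires the Concatenate layer. A sketch of the corrected wiring, which also fixes the metadata input shape (the per-sample shape (3,), without the batch dimension) and the model's inputs argument (the two Input layers, not the merged tensor):

# Sketch of the fix, keeping the rest of the pipeline as posted.
from tensorflow.keras.layers import Concatenate

metadata_input = Input(shape=(3,))                          # per-sample shape
merge_layer = Concatenate()([metadata_input, conv_output])  # Keras layer, not np.concatenate
dense = Dense(100, activation='relu', kernel_regularizer=regularizers.l2(l=0.01))(merge_layer)
dense = Dropout(0.5)(dense)
output = Dense(n_outputs, activation='softmax')(dense)
model = Model(inputs=[model_input, metadata_input], outputs=output)

Note that evaluate and predict then also need both inputs, e.g. model.evaluate([testX, testMetadata], testy, batch_size=batch_size, verbose=0).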
Hello, I have a question about Keras.
Currently I want to implement a network that uses the same CNN model with two images as its inputs, and feeds the two results of the CNN model into a Dense model.
For example:
# Imports assumed for this snippet
from tensorflow.keras.layers import Input, Conv2D, GlobalAvgPool2D, Dense, Activation, concatenate
from tensorflow.keras.models import Model

def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)  # was "ecpochs"
I want to implement a model something like this and train it, but I get an error message like the one below:
'All layer names should be unique'
Actually, I want to use only one CNN model as a feature extractor and finally use the two features to predict one float value in the range 0.0 ~ 1.0.
So the whole system: take two images, extract features from the same CNN model, and provide the features to a Dense model to get one floating-point value.
Please help me implement this system and train it.
Thank you
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
import keras
from keras.layers import Input, LSTM, Dense
from keras.models import Model

# In the documentation's example, the two inputs are tweets
# encoded as sequences of 280 vectors of size 256:
tweet_a = Input(shape=(280, 256))
tweet_b = Input(shape=(280, 256))

# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)

# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)

# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)

# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
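Applied to the question's code, the same pattern means building one cnn_model() instance and calling it on two separate image inputs, so the weights (and layer names) are shared instead of duplicated; a hedged sketch:

# Sketch: a single CNN instance called on two inputs shares its weights and
# avoids the duplicated layer names that two separate instances can produce.
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

cnn = cnn_model()  # one shared feature extractor
image_a = Input(shape=(None, None, 3))
image_b = Input(shape=(None, None, 3))

features_a = cnn(image_a)
features_b = cnn(image_b)

merged = concatenate([features_a, features_b])
score = Dense(1, activation='sigmoid')(merged)  # one float in 0.0 ~ 1.0

model = Model([image_a, image_b], score)
model.compile(optimizer='adam', loss='mean_squared_error')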