Hyperparameter optimization for image classification with Talos and flow_from_directory - keras

I am trying to optimize the hyperparameters of my Keras CNN for image classification. I considered grid search from sklearn and the Talos optimizer (https://github.com/autonomio/talos). I got past the fundamental difficulty of producing x and y from flow_from_directory (code below), but it still does not work. Any ideas? Maybe someone has faced the same problem.
import talos as ta
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator

def talos_model(train_flow, validation_flow, nb_train_samples, nb_validation_samples, params):
    model = Sequential()
    model.add(Conv2D(6, (5, 5), activation="relu", padding="same",
                     input_shape=(img_width, img_height, 3)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Dropout(params['dropout']))
    model.add(Conv2D(16, (5, 5), activation="relu"))
    model.add(MaxPooling2D((2, 2)))
    model.add(Dropout(params['dropout']))
    model.add(Flatten())
    model.add(Dense(120, activation='relu', kernel_initializer=params['kernel_initializer']))
    model.add(Dropout(params['dropout']))
    model.add(Dense(84, activation='relu', kernel_initializer=params['kernel_initializer']))
    model.add(Dropout(params['dropout']))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss=params['loss'],
                  optimizer=params['optimizer'],
                  metrics=['categorical_accuracy'])
    checkpointer = ModelCheckpoint(filepath='talos_cnn.h5py',
                                   monitor='val_categorical_accuracy', save_best_only=True)
    history = model.fit_generator(generator=train_flow,
                                  steps_per_epoch=nb_train_samples // batch_size,
                                  validation_data=validation_flow,
                                  validation_steps=nb_validation_samples // batch_size,
                                  callbacks=[checkpointer],
                                  verbose=1,
                                  epochs=params['epochs'])
    return history, model

train_generator = ImageDataGenerator(rescale=1/255)
validation_generator = ImageDataGenerator(rescale=1/255)

# Retrieve images and their classes for train and validation sets
train_flow = train_generator.flow_from_directory(directory=train_data_dir,
                                                 batch_size=batch_size,
                                                 target_size=(img_height, img_width))
validation_flow = validation_generator.flow_from_directory(directory=validation_data_dir,
                                                           batch_size=batch_size,
                                                           target_size=(img_height, img_width),
                                                           shuffle=False)

# here I make x and y for talos
(X_train, Y_train) = train_flow.next()

# starting an experiment with talos
t = ta.Scan(x=X_train,
            y=Y_train,
            model=talos_model,
            params=params,
            dataset_name='landmarks',
            experiment_no='1')
The error occurs in the last line:
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
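For context on the x/y step above: train_flow.next() returns only a single batch of batch_size images, so the Scan only ever sees that one slice of the data. A minimal sketch of one way to materialize several batches into arrays (assuming they fit in memory; variable names follow the question):

import numpy as np

# draw a fixed number of batches from the generator and stack them into arrays
n_batches = nb_train_samples // batch_size
xs, ys = zip(*(train_flow.next() for _ in range(n_batches)))
X_train = np.concatenate(xs)
Y_train = np.concatenate(ys)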

Related

Tensorflow structured data model.predict() returns incorrect probabilities

I'm trying to follow a TensorFlow tutorial (I'm a beginner) for structured data models, with some changes along the way.
My purpose is to create a model to which I provide data (in CSV format) that looks something like this (the example has only 2 features, but I want to extend it after I figure it out):
power_0,power_1,result
0.2,0.3,draw
0.8,0.1,win
0.3,0.1,draw
0.7,0.2,win
0.0,0.4,lose
I created the model using the following code:
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

def get_labels(df, label, mapping):
    raw_y_true = df.pop(label)
    y_true = np.zeros((len(raw_y_true)))
    for i, raw_label in enumerate(raw_y_true):
        y_true[i] = mapping[raw_label]
    return y_true

tf.compat.v1.enable_eager_execution()
mapping_to_numbers = {'win': 0, 'draw': 1, 'lose': 2}
data_frame = pd.read_csv('data.csv')
data_frame.head()
train, test = train_test_split(data_frame, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
train_labels = np.array(get_labels(train, label='result', mapping=mapping_to_numbers))
val_labels = np.array(get_labels(val, label='result', mapping=mapping_to_numbers))
test_labels = np.array(get_labels(test, label='result', mapping=mapping_to_numbers))
train_features = np.array(train)
val_features = np.array(val)
test_features = np.array(test)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(train_features.shape[-1],)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation='sigmoid'),
])
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy'],
    run_eagerly=True)

epochs = 10
batch_size = 100
history = model.fit(
    train_features,
    train_labels,
    epochs=epochs,
    validation_data=(val_features, val_labels))

input_data_frame = pd.read_csv('input.csv')
input_data_frame.head()
input_data = np.array(input_data_frame)
print(model.predict(input_data))
input.csv looks as follows:
power_0,power_1
0.8,0.1
0.7,0.2
And the actual result is:
[[0.00604381 0.00242573 0.00440606]
[0.01321151 0.00634229 0.01041476]]
I expected to get the probability for each label ('win', 'draw' and 'lose'). Can anyone please help me with this?
Thanks in advance.
Use softmax activation in this line: tf.keras.layers.Dense(3, activation='sigmoid').
This works well for me with your example:
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(train_features.shape[-1],)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
    run_eagerly=True)
Note the use of a Flatten layer as the input layer.
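As a quick illustration of why softmax is the right choice here (a standalone sketch, not part of the answer's model): softmax turns raw scores into probabilities that sum to 1 across the three classes:

import numpy as np

def softmax(z):
    # shift by the max for numerical stability; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # approx. [0.659 0.242 0.099]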
I have to write my suggestions here because I can't comment yet.
@zihaozhihao is right: you have to use softmax instead of sigmoid because you are not working with a binary problem. Another problem might be your loss function, which is:
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy'],
    run_eagerly=True)
Try to use loss='categorical_crossentropy', because you are working with a multiclass classification problem. You can read more about multiclass classification here and here.
As for your probability question: you get the probability of each class for your two test inputs. For example:
     win        draw       lose
[[0.00604381 0.00242573 0.00440606]
 [0.01321151 0.00634229 0.01041476]]
The problem is your loss function and the activation function, which lead to strange probability values. You might want to check this post here for more information.
Hope this helps a little, and feel free to ask.
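To make the loss/label pairing concrete (a sketch with made-up labels, not from the question's data): sparse_categorical_crossentropy takes the integer labels produced by get_labels directly, while categorical_crossentropy expects one-hot vectors:

import numpy as np
from tensorflow.keras.utils import to_categorical

labels = np.array([0, 1, 2, 0])                  # integer-encoded: win=0, draw=1, lose=2
one_hot = to_categorical(labels, num_classes=3)  # shape (4, 3), one column per class

# pair loss='sparse_categorical_crossentropy' with `labels`,
# or loss='categorical_crossentropy' with `one_hot`
print(one_hot)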
Since it is a multiclass classification problem, please use categorical_crossentropy instead of binary_crossentropy as the loss function, and use softmax instead of sigmoid as the activation function.
Also, you should increase the number of epochs for better convergence.

Find Most Important Input from a Neural Network

I trained a neural network with 37 inputs. It has around 85% accuracy. Is it possible for me to find out which input has the most effect? I tried this code, but I cannot figure out how to find the most important input:
weights = model.layers[0].get_weights()[0]  # first-layer weight matrix, shape (n_inputs, n_units)
biases = model.layers[0].get_weights()[1]   # first-layer bias vector
One possible solution is to wrap your model with keras.wrappers.scikit_learn and then use recursive feature elimination (RFE) from scikit-learn:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt

def create_model():
    # create model
    model = Sequential()
    model.add(Dense(512, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=128, verbose=0)
rfe = RFE(estimator=model, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)  # assumes image-shaped inputs, as in sklearn's digits example

# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
If you need to visualize the weights, see here.
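As a rougher heuristic built on the weights you already extracted (an approximation that assumes the inputs are on comparable scales and ignores interactions in deeper layers), you can rank inputs by the mean absolute weight in the first layer:

import numpy as np

weights = model.layers[0].get_weights()[0]  # shape (37, n_units)
importance = np.abs(weights).mean(axis=1)   # average magnitude per input feature
ranking = np.argsort(importance)[::-1]      # input indices, most influential first
print(ranking[:5])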

Wrong dimensions when trying to use model.predict() from Keras

I think the code will speak for itself, but I trained a model that I now want to use to predict on some new input data. The new input data seems to have the wrong dimensions, though. Below you can see the code and the error message for both the model and the (attempted) prediction.
import numpy
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Dropout, LSTM, Dense
from sklearn.model_selection import train_test_split

tokenizer = Tokenizer(num_words=10000)
df = pd.read_csv('/home/paperspace/Sentiment Analysis Dataset.csv', index_col=0,
                 error_bad_lines=False)
y = list(df['Sentiment'])
tokenizer.fit_on_texts(list(df['SentimentText']))
X = tokenizer.texts_to_sequences(list(df['SentimentText']))
X = pad_sequences(X)
print("Done, fitting on texts.")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, shuffle=True)

model = Sequential()
# Creates the word embeddings.
embedding_vector_dim = 32
model.add(Embedding(10000, embedding_vector_dim, input_length=X.shape[1]))
model.add(Dropout(0.2))
model.add(LSTM(128))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()

model.fit(numpy.array(X_train), numpy.array(y_train),
          batch_size=128,
          epochs=1,
          validation_data=(numpy.array(X_test), numpy.array(y_test)))
score, acc = model.evaluate(numpy.array(X_test), numpy.array(y_test),
                            batch_size=128)
model.save('./sentiment_seq.h5')
print('Test score:', score)
print('Test accuracy:', acc)
Now for the prediction attempt and the error message.
text = "this is actually a very bad movie."
tokenizer = Tokenizer()
tokenizer.fit_on_texts(list(text))
X = tokenizer.texts_to_sequences(list(text))
X = pad_sequences(X)
X_flat = np.array([X.flatten()])
model = load_model('sentiment_test.h5')
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print(model.predict(X, batch_size = 1, verbose = 1))
ValueError: Error when checking : expected embedding_1_input to have shape (None, 116) but got array with shape (1, 38)
So basically, why am I getting this error when the preprocessing is the same for training and predicting, and how can I know what the expected input should be before seeing the error message?
If you're not working with a fixed input length, you should not define an input_length in the embedding layer.
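A minimal sketch of consistent prediction-time preprocessing (assuming the tokenizer fitted on the training corpus is still in scope; refitting a fresh Tokenizer on the new string alone builds a different, incompatible vocabulary):

maxlen = model.input_shape[1]  # the length the model expects (116 here)

# reuse the training tokenizer; texts_to_sequences takes a list of strings
new_seq = tokenizer.texts_to_sequences(["this is actually a very bad movie."])
new_seq = pad_sequences(new_seq, maxlen=maxlen)  # pad/truncate to the training length
print(model.predict(new_seq))

As for knowing the expected input up front: model.input_shape (or model.summary()) reports it before any predict call.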

Error in last layer of neural network

import numpy as np
from sklearn.model_selection import StratifiedKFold
from keras.models import Sequential
from keras.layers import Dense, Activation

# 10-fold split
seed = 7
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
np.random.seed(seed)
cvscores = []
act = 'relu'
for train, test in kfold.split(X, Y):
    model = Sequential()
    model.add(Dense(43, input_shape=(8,)))
    model.add(Activation(act))
    model.add(Dense(500))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(1000))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(1500))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(2))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    hist = model.fit(X[train], Y[train],
                     epochs=500,
                     shuffle=True,
                     batch_size=100,
                     validation_data=(X[test], Y[test]), verbose=2)
    #model.summary()
When I call model.fit, it reports the following error:
ValueError: Error when checking target: expected activation_5 to have shape (None, 2) but got array with shape (3869, 1)
I am using Keras with the TensorFlow backend. Please ask for any further clarification if needed.
The problem was solved when we used this statement:
y = to_categorical(Y[:])
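For illustration, a minimal sketch (with made-up labels) of what to_categorical does to the target shape, turning the (3869, 1)-style integer target into the (None, 2) one-hot shape the softmax layer expects:

import numpy as np
from keras.utils import to_categorical

Y = np.array([0, 1, 1, 0])  # shape (4,), like the original single-column target
y = to_categorical(Y)       # shape (4, 2): one one-hot column per class
print(y)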

ValueError when Fine-tuning Inception_v3 in Keras

I am trying to fine-tune a pre-trained InceptionV3 in Keras for a multi-label (17 labels) prediction problem.
Here's the code:
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Dense

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a new top layer
x = base_model.output
predictions = Dense(17, activation='sigmoid')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(loss='binary_crossentropy',  # we NEED binary here, since categorical_crossentropy l1-normalizes the output before calculating loss
              optimizer=SGD(lr=0.0001, momentum=0.9))

# fit the model (add history so that the history may be saved)
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=1,
                    verbose=1,
                    callbacks=callbacks_list,
                    validation_data=(x_valid, y_valid))
But I got the following error message and had trouble deciphering what it is saying:
ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (1024, 17)
It seems to have something to do with it not liking my one-hot encoding for the labels as the target. But how do I get a 4-dimensional target?
It turns out that the code copied from https://keras.io/applications/ would not run out-of-the-box.
The following post helped me: Keras VGG16 fine tuning.
The changes I needed to make are the following:
1. Add the input shape to the model definition: base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3)), and
2. Add a Flatten() layer to flatten the tensor output:
x = base_model.output
x = Flatten()(x)
predictions = Dense(17, activation='sigmoid')(x)
Then the model works for me!
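For reference, the keras.io fine-tuning example removes the extra spatial dimensions with GlobalAveragePooling2D rather than Flatten; either resolves the 4D-target mismatch, and pooling keeps the new top layer smaller. A sketch of that variant:

from keras.applications.inception_v3 import InceptionV3
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)                    # (None, 8, 8, 2048) -> (None, 2048)
predictions = Dense(17, activation='sigmoid')(x)   # 17 independent label probabilities
model = Model(inputs=base_model.input, outputs=predictions)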
