Why is this simple Keras 3-class classifier predicting only one class instead of the other classes?

I am trying to create a simple 3-class deep learning classifier using Keras as follows:
clf = Sequential()
clf.add(Dense(20, activation='relu', input_dim=NUM_OF_FEATURES))
clf.add(Dense(10, activation='relu'))
clf.add(Dense(3, activation='relu'))
clf.add(Dense(1, activation='softmax'))
# Model Compilation
clf.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['accuracy'])
# Training the model
clf.fit(X_train,
        y_train,
        epochs=10,
        batch_size=16,
        validation_data=(X_val, y_val))
However, after training, the model always predicts the same class (class 1) out of the 3 classes.
Is my network architecture incorrect?
I am new to deep learning and AI.

If you want a network to classify three classes, your last dense layer should have three output nodes. In the example, the last dense layer has one output node.
clf = Sequential()
clf.add(Dense(20, activation='relu', input_dim=NUM_OF_FEATURES))
clf.add(Dense(10, activation='relu'))
clf.add(Dense(3, activation='relu'))
clf.add(Dense(3, activation='softmax'))
For each input sample, the output will be three values, all of which sum to one. These represent the probabilities that the input belongs to each class.
Regarding the loss function, if you want to use cross entropy, you have a choice between sparse categorical cross entropy and categorical cross entropy. The latter expects ground truth labels to be one-hot encoded (you can use tf.one_hot for this); in other words, the shape of the labels is the same as the shape of the network's output. Sparse categorical cross entropy, on the other hand, expects labels with rank N-1, where N is the rank of the neural network's output. In other words, these are the labels before one-hot encoding.
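For intuition, here is a minimal sketch of the two options, assuming X_train and integer class labels y_train_int (a hypothetical name for the labels before one-hot encoding, with values 0, 1 or 2) are already defined:
import tensorflow as tf

# Option 1: categorical cross entropy expects one-hot labels
y_train_onehot = tf.one_hot(y_train_int, depth=3)  # shape: (num_samples, 3)
clf.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
clf.fit(X_train, y_train_onehot, epochs=10, batch_size=16)

# Option 2: sparse categorical cross entropy expects the integer labels directly
clf.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
clf.fit(X_train, y_train_int, epochs=10, batch_size=16)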
When the model is used for inference, the predicted class values can be retrieved with argmax of the last dimension of the predictions.
predictions = clf.predict(x)
classes = predictions.argmax(-1)

Related

tensorflow sequential model outputting nan

Why is my code outputting nan? I'm using a sequential model with a 30x1 input vector and a single value output. I'm using TensorFlow and Python. This is one of my first…
while True:
    # Define a simple sequential model
    def create_model():
        model = tf.keras.Sequential([
            keras.layers.Dense(30, activation='relu', input_shape=(30,)),
            keras.layers.Dense(12, activation='relu'),
            keras.layers.Dropout(0.2),
            keras.layers.Dense(7, activation='relu'),
            keras.layers.Dense(1, activation='sigmoid')
        ])
        model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
        return model

    # Create a basic model instance
    model = create_model()
    # Display the model's architecture
    model.summary()

    train_labels = [1]
    test_labels = [1]
    train_images = [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]]
    test_images = [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]]

    model.fit(train_images,
              train_labels,
              epochs=10,
              validation_data=(test_images, test_labels),
              verbose=1)
    print('predicted:', model.predict(train_images))
You are using SparseCategoricalCrossentropy. It expects labels to be integers starting from 0. You only have the label 1, but that implies at least two categories, 0 and 1, so you need at least two neurons in the last layer:
keras.layers.Dense(2, activation='sigmoid')
(If your goal is classification, you may want to use softmax instead of sigmoid, without from_logits=True.)
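A minimal sketch of that suggestion, reusing the layers from the question (two output units, softmax, and the loss computed on probabilities rather than logits):
model = tf.keras.Sequential([
    keras.layers.Dense(30, activation='relu', input_shape=(30,)),
    keras.layers.Dense(12, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(7, activation='relu'),
    keras.layers.Dense(2, activation='softmax')  # one unit per class
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),  # from_logits defaults to False
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])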
You're using the wrong loss function for those labels. You need to use BinaryCrossentropy.
Change:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
To:
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

Hyperparameters tuning with keras tuner for classification problem

I am trying to implement both a classification problem and a regression problem with Keras Tuner. Here is my code for the regression problem:
def build_model(hp):
    model = keras.Sequential()
    for i in range(hp.Int('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                            min_value=32,
                                            max_value=512,
                                            step=32),
                               activation='relu'))
    # Tune whether to use dropout.
    if hp.Boolean("dropout"):
        model.add(layers.Dropout(rate=0.5))
    model.add(layers.Dense(1, activation='linear'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-4, 1e-3, 1e-5])),
        loss='mean_absolute_error',
        metrics=['mean_absolute_error'])
    return model

tuner = RandomSearch(
    build_model,
    objective='val_mean_absolute_error',
    max_trials=5,
    executions_per_trial=2,
    # overwrite=True,
    directory='projects',
    project_name='Air Quality Index')
In order to apply this code to a classification problem, which parameters (loss, objective, metrics, etc.) have to be changed?
To use this code for a classification problem, you will have to change the loss function, the tuner objective, and the activation function of your output layer. Depending on the number of classes, you will use different functions:
Number of classes       Two                       More than two
Loss                    binary_crossentropy       categorical_crossentropy
Tuner objective         val_binary_crossentropy   val_categorical_crossentropy
Last layer activation   sigmoid                   softmax
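As an illustration, here is a sketch of the same tuner set up for a multi-class problem. The names build_clf_model, NUM_CLASSES and the project name are hypothetical, the keras_tuner import is assumed (older releases use the kerastuner package name), and the cross-entropy loss is also tracked as a metric so that val_categorical_crossentropy is available as the tuner objective:
from tensorflow import keras
from tensorflow.keras import layers
from keras_tuner import RandomSearch  # 'kerastuner' in older releases

NUM_CLASSES = 3  # hypothetical: set to the number of classes in your data

def build_clf_model(hp):
    model = keras.Sequential()
    for i in range(hp.Int('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                            min_value=32,
                                            max_value=512,
                                            step=32),
                               activation='relu'))
    if hp.Boolean("dropout"):
        model.add(layers.Dropout(rate=0.5))
    # Multi-class head: one unit per class, softmax activation
    model.add(layers.Dense(NUM_CLASSES, activation='softmax'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-4, 1e-3, 1e-5])),
        # expects one-hot labels; use 'sparse_categorical_crossentropy' for integer labels
        loss='categorical_crossentropy',
        metrics=['categorical_crossentropy', 'accuracy'])
    return model

tuner = RandomSearch(
    build_clf_model,
    objective='val_categorical_crossentropy',
    max_trials=5,
    executions_per_trial=2,
    directory='projects',
    project_name='classification_example')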

deep learning data preparation

I have a text dataset that contains 6 classes. For each sample, I have a percent value for each class, and the sum of the 6 percent values is 100% (the features are related to each other). For example:
{A:16, B:35, C:7, D:0, E:3, F:40}
How can I feed a deep learning algorithm with this dataset?
I actually want the prediction to be in exactly the same shape as the training data.
Here is what you can do:
First of all, normalize your labels so that they are scaled between 0 and 1.
Use a softmax layer for prediction.
Here is some code in Keras for intuition:
model = Sequential()
model.add(Dense(100, input_dim = x.shape[1], activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
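For the normalization step, a small sketch of how the percent labels could be scaled so that each row lies in [0, 1] and sums to 1 (y_raw is a hypothetical NumPy array of raw percent values, one row per sample):
import numpy as np

# raw percent labels, one row per sample, e.g. the example {A:16, B:35, C:7, D:0, E:3, F:40}
y_raw = np.array([[16, 35, 7, 0, 3, 40]], dtype=np.float32)

# scale each row so its values lie in [0, 1] and sum to 1,
# matching the softmax output of the network
y = y_raw / y_raw.sum(axis=1, keepdims=True)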

Is there any method to plot vector(matrix) values after each NN layer

I modified an existing activation function and am using it in the convolutional layer of a neural network. I would like to know how it performs compared to the existing activation function. Is there any method/function to plot the results (matrix values) after each neural network layer, so that I can adapt my activation function according to the values for better results?
model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=max_length, trainable=False)
model.add(e)
model.add(Conv1D(64,kernel_size,padding='valid',activation=newactivation,strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Conv1D(256,kernel_size,padding='valid',activation=newactivation,strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Bidirectional(GRU(gru_output_size, dropout=0.2, recurrent_dropout=0.2)))
model.add(Bidirectional(LSTM(lstm_output_size)))
model.add(Dense(nclass, activation='softmax'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
print(model.summary())
model.fit(padded_docs,y_train, epochs=epoch_size, verbose=0)
loss, accuracy = model.evaluate(tpadded_docs, y_test, verbose=0)
I cannot comment yet, so I post this as an answer:
Refer to the Keras FAQ: "How can I obtain the output of an intermediate layer?"
It shows how you can access the output of each layer. If you use the variant based on a Keras backend function, you can even access the output in the learning phase (useful if your model contains layers that behave differently in training vs. testing).
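As a rough sketch of that FAQ approach, assuming the model from the question has already been built and trained with tf.keras, a wrapper model can expose every intermediate output (the layer index and plotting choices below are just for illustration):
from tensorflow import keras
import matplotlib.pyplot as plt

# Wrap the trained model so that every layer's output becomes a model output
activation_model = keras.Model(inputs=model.input,
                               outputs=[layer.output for layer in model.layers])

# One activation array per layer for the given batch of padded documents
activations = activation_model.predict(padded_docs[:1])

# Example: visualize the feature maps of the first Conv1D layer as a heatmap
plt.imshow(activations[1][0].T, aspect='auto', cmap='viridis')
plt.xlabel('sequence position')
plt.ylabel('filter')
plt.show()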

Accuracy goes to 0.0000 when training RNN with Keras?

I'm trying to use custom word-embeddings from Spacy for training a sequence -> label RNN query classifier. Here's my code:
word_vector_length = 300
dictionary_size = v.num_tokens + 1
word_vectors = v.get_word_vector_dictionary()
embedding_weights = np.zeros((dictionary_size, word_vector_length))
max_length = 186

for word, index in dictionary._get_raw_id_to_token().items():
    if word in word_vectors:
        embedding_weights[index, :] = word_vectors[word]

model = Sequential()
model.add(Embedding(input_dim=dictionary_size, output_dim=word_vector_length,
                    input_length=max_length, mask_zero=True, weights=[embedding_weights]))
model.add(Bidirectional(LSTM(128, activation='relu', return_sequences=False)))
model.add(Dense(v.num_labels, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=200, nb_epoch=20)
Here the word vectors are taken from spacy.vectors and have length 300. The input is a NumPy array that looks like [0,0,12,15,0,...] of dimension 186, where the integers are the token ids of the input, and I constructed the embedding weight matrix accordingly. The output is [0,0,1,0,...,0] of length 26 for each training sample, indicating the label that should go with this piece of vectorized text.
This looks like it should work, but during the first epoch the training accuracy keeps decreasing... and by the end of the first epoch (and for the rest of training) it is exactly 0, and I'm not sure why. I've trained plenty of models with Keras/TF before and never encountered this issue.
Any idea what might be happening here?
Are the labels always one-hot, meaning only one element of the label vector is one and the rest are zero?
If so, then try using a softmax activation with a categorical crossentropy loss, as in the following official example:
https://github.com/fchollet/keras/blob/master/examples/babi_memnn.py#L202
This helps constrain the network to output probability distributions on the last layer (i.e. the softmax layer outputs sum to 1).
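Applied to the model in the question, that suggestion would look roughly like this (a sketch, keeping the rest of the code as posted):
model.add(Dense(v.num_labels, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])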
