keras parameters for multilabel text classification - python-3.x

I am using Keras for multiclass text classification; the dataset contains 25,000 Arabic tweets with 10 class labels.
I use this code:
model = Sequential()
model.add(Dense(512, input_shape=(10902,)))#10902
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
#categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
..
history = model.fit(X_train, y_train,
                    batch_size=100,
                    epochs=30,
                    verbose=1,
                    validation_split=0.5)
Summary:
Layer (type) Output Shape Param #
=================================================================
dense_23 (Dense) (None, 512) 5582336
_________________________________________________________________
activation_22 (Activation) (None, 512) 0
_________________________________________________________________
dropout_15 (Dropout) (None, 512) 0
_________________________________________________________________
dense_24 (Dense) (None, 512) 262656
_________________________________________________________________
activation_23 (Activation) (None, 512) 0
_________________________________________________________________
dropout_16 (Dropout) (None, 512) 0
_________________________________________________________________
dense_25 (Dense) (None, 10) 5130
_________________________________________________________________
activation_24 (Activation) (None, 10) 0
=================================================================
Total params: 5,850,122
Trainable params: 5,850,122
Non-trainable params: 0
But I get this error:
could not convert string to float: 'food'
where 'food' is a class name.
When I change the loss to categorical_crossentropy, I get the error:
Error when checking target: expected activation_24 to have shape (10,) but got array with shape (1,)
Update
nd = data.replace(['ads', 'Politic', 'eco', 'food', 'health', 'porno', 'religion', 'sports', 'tech', 'tv'],
                  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
model = Sequential()
model.add(Dense(512, input_shape=(10902, 10)))  # no. of words
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
# categorical_crossentropy
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
y_train = keras.utils.to_categorical(y_train)
history = model.fit(X_train, y_train,
                    batch_size=100,
                    epochs=30,
                    verbose=1,
                    validation_split=0.5)

You correctly used Dense(10) at the end to produce ten outputs, one per class.
But your target y_train must also be shaped with 10 classes.
It should have shape (numberOfTweets, 10).
To get there:
If you have an array of indices, transform it with the Keras function y_train = to_categorical(y_train).
If you have them as strings, you must first transform them into indices, and then use to_categorical.
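A minimal sketch of that conversion, assuming y_train starts as an array of the class-name strings from the question (the label_to_index mapping is illustrative); note the indices must start at 0, not 1, so that to_categorical with 10 classes matches Dense(10):
import numpy as np
from keras.utils import to_categorical

class_names = ['ads', 'Politic', 'eco', 'food', 'health',
               'porno', 'religion', 'sports', 'tech', 'tv']
label_to_index = {name: i for i, name in enumerate(class_names)}  # 0..9

y_train = np.array([label_to_index[label] for label in y_train])  # strings -> integer indices
y_train = to_categorical(y_train, num_classes=10)  # one-hot, shape (numberOfTweets, 10)
With this shape, categorical_crossentropy matches the Dense(10)/softmax output; keeping the integer indices instead would pair with sparse_categorical_crossentropy.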

Related

ValueError: expected dense_22 to have shape (None, 37) but got array with shape (1000, 2)

I am currently working on a question answering system. I created a synthetic dataset that contains multiple words in the answers, but the answers are not spans of the given context.
Initially, I am planning to test it with a deep-learning-based model, but I have some problems building the model.
This is how I vectorize the data:
def vectorize(data, word2idx, story_maxlen, question_maxlen, answer_maxlen):
    """ Create the story and question vectors and the label """
    Xs, Xq, Y = [], [], []
    for story, question, answer in data:
        xs = [word2idx[word] for word in story]
        xq = [word2idx[word] for word in question]
        y = [word2idx[word] for word in answer]
        #y = np.zeros(len(word2idx) + 1)
        #y[word2idx[answer]] = 1
        Xs.append(xs)
        Xq.append(xq)
        Y.append(y)
    return (pad_sequences(Xs, maxlen=story_maxlen),
            pad_sequences(Xq, maxlen=question_maxlen),
            pad_sequences(Y, maxlen=answer_maxlen))
            #np.array(Y))
Below is how I create the model:
# story encoder. Output dim: (None, story_maxlen, EMBED_HIDDEN_SIZE)
story_encoder = Sequential()
story_encoder.add(Embedding(input_dim=vocab_size,
                            output_dim=EMBED_HIDDEN_SIZE,
                            input_length=story_maxlen))
story_encoder.add(Dropout(0.3))
# question encoder. Output dim: (None, question_maxlen, EMBED_HIDDEN_SIZE)
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
                               output_dim=EMBED_HIDDEN_SIZE,
                               input_length=question_maxlen))
question_encoder.add(Dropout(0.3))
# episodic memory (facts): story * question
# Output dim: (None, question_maxlen, story_maxlen)
facts_encoder = Sequential()
facts_encoder.add(Merge([story_encoder, question_encoder],
                        mode="dot", dot_axes=[2, 2]))
facts_encoder.add(Permute((2, 1)))
## combine response and question vectors and do logistic regression
answer = Sequential()
answer.add(Merge([facts_encoder, question_encoder],
                 mode="concat", concat_axis=-1))
answer.add(LSTM(LSTM_OUTPUT_SIZE, return_sequences=True))
answer.add(Dropout(0.3))
answer.add(Flatten())
answer.add(Dense(vocab_size, activation="softmax"))
answer.compile(optimizer="rmsprop", loss="categorical_crossentropy",
               metrics=["accuracy"])
answer.fit([Xs_train, Xq_train], Y_train,
           batch_size=BATCH_SIZE, nb_epoch=NBR_EPOCHS,
           validation_data=([Xs_test, Xq_test], Y_test))
And this is the summary of the model:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
merge_46 (Merge) (None, 5, 616) 0
_________________________________________________________________
lstm_23 (LSTM) (None, 5, 32) 83072
_________________________________________________________________
dropout_69 (Dropout) (None, 5, 32) 0
_________________________________________________________________
flatten_9 (Flatten) (None, 160) 0
_________________________________________________________________
dense_22 (Dense) (None, 37) 5957
=================================================================
Total params: 93,765.0
Trainable params: 93,765.0
Non-trainable params: 0.0
_________________________________________________________________
It gives the following error.
ValueError: Error when checking model target: expected dense_22 to have shape (None, 37) but got array with shape (1000, 2)
I think the error is related to Y_train and Y_test: I should encode them as categorical values, but the answers are not spans of text, they are sequences, and I don't know what to do or how to do it.
How can I fix it? Any ideas?
EDIT:
When I use sparse_categorical_crossentropy as the loss and a Reshape((2, -1)) layer, the summary is:
answer.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
merge_94 (Merge) (None, 5, 616) 0
_________________________________________________________________
lstm_65 (LSTM) (None, 5, 32) 83072
_________________________________________________________________
dropout_139 (Dropout) (None, 5, 32) 0
_________________________________________________________________
reshape_22 (Reshape) (None, 2, 80) 0
_________________________________________________________________
dense_44 (Dense) (None, 2, 37) 2997
=================================================================
Total params: 90,805.0
Trainable params: 90,805.0
Non-trainable params: 0.0
_________________________________________________________________
EDIT2:
The model after modifications
# story encoder. Output dim: (None, story_maxlen, EMBED_HIDDEN_SIZE)
story_encoder = Sequential()
story_encoder.add(Embedding(input_dim=vocab_size,
                            output_dim=EMBED_HIDDEN_SIZE,
                            input_length=story_maxlen))
story_encoder.add(Dropout(0.3))
# question encoder. Output dim: (None, question_maxlen, EMBED_HIDDEN_SIZE)
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
                               output_dim=EMBED_HIDDEN_SIZE,
                               input_length=question_maxlen))
question_encoder.add(Dropout(0.3))
# episodic memory (facts): story * question
# Output dim: (None, question_maxlen, story_maxlen)
facts_encoder = Sequential()
facts_encoder.add(Merge([story_encoder, question_encoder],
                        mode="dot", dot_axes=[2, 2]))
facts_encoder.add(Permute((2, 1)))
## combine response and question vectors and do logistic regression
answer = Sequential()
answer.add(Merge([facts_encoder, question_encoder],
                 mode="concat", concat_axis=-1))
answer.add(LSTM(LSTM_OUTPUT_SIZE, return_sequences=True))
answer.add(Dropout(0.3))
#answer.add(Flatten())
answer.add(keras.layers.Reshape((2, -1)))
answer.add(Dense(vocab_size, activation="softmax"))
answer.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
answer.fit([Xs_train, Xq_train], Y_train,
           batch_size=BATCH_SIZE, nb_epoch=NBR_EPOCHS,
           validation_data=([Xs_test, Xq_test], Y_test))
It still gives:
ValueError: Error when checking model target: expected dense_46 to have 3 dimensions, but got array with shape (1000, 2)
As far as I understand, Y_train and Y_test consist of indices (not one-hot vectors). If so, change the loss to sparse_categorical_crossentropy:
answer.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
As far as I understand, Y_train and Y_test also have a sequence dimension, and the length of the questions (5) doesn't equal the length of the answers (2). This dimension is removed by Flatten(). Try replacing Flatten() with Reshape():
# answer.add(Flatten())
answer.add(tf.keras.layers.Reshape((2, -1)))
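Depending on the Keras version, sparse_categorical_crossentropy on a 3D output may also expect the integer targets to carry a trailing singleton dimension, which would explain the remaining 3-dimensions error. A sketch, assuming Y_train and Y_test hold word-index arrays of shape (1000, 2) and similar:
import numpy as np

# (1000, 2) -> (1000, 2, 1): one integer word index per answer position,
# compared position-wise against the (None, 2, vocab_size) softmax output
Y_train = np.expand_dims(Y_train, axis=-1)
Y_test = np.expand_dims(Y_test, axis=-1)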

Keras model not learning after training

I am training a Keras model for a sentence classification task. The problem is that although it reaches an accuracy of 94%, it is not learning anything: when I give it a new sentence (not present in the dataset), it returns the same probability every time (in the model.predict step). I can't figure out why this is happening.
Here is my model
model = Sequential()
model.add(Embedding(max_words, 30, input_length=max_len))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Bidirectional(LSTM(32)))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='sigmoid'))
model.summary()
Here max_words = 2000 and max_len=300
Here is the model summary
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, 300, 30) 60000
_________________________________________________________________
batch_normalization_5 (Batch (None, 300, 30) 120
_________________________________________________________________
activation_5 (Activation) (None, 300, 30) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 300, 30) 0
_________________________________________________________________
bidirectional_3 (Bidirection (None, 64) 16128
_________________________________________________________________
batch_normalization_6 (Batch (None, 64) 256
_________________________________________________________________
activation_6 (Activation) (None, 64) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 64) 0
_________________________________________________________________
dense_3 (Dense) (None, 2) 130
=================================================================
Total params: 76,634
Trainable params: 76,446
Non-trainable params: 188
And here is the training code; the size of my dataset is 20k, with 10% held out for testing.
model.compile(loss='sparse_categorical_crossentropy', metrics=['accuracy'], optimizer = 'adam')
history = model.fit(sequences_matrix, Y_train, batch_size=256, epochs=50, validation_split=0.1)
Try changing the activation function of the last layer from sigmoid to softmax; sigmoid doesn't quite match the loss you are using (categorical cross-entropy). If you use sigmoid, then you only need one unit and should use binary cross-entropy loss.
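A sketch of the two consistent pairings, showing only the final layer and compile call (pick one; both assume integer 0/1 labels in Y_train):
# Option A: two units + softmax, keeping the sparse loss from the question
model.add(Dense(2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', metrics=['accuracy'], optimizer='adam')

# Option B: one unit + sigmoid with binary cross-entropy
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', metrics=['accuracy'], optimizer='adam')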

Error when checking input: expected dense_28_input to have 2 dimensions, but got array with shape (3084, 32, 32)

I'm trying to make a prediction with my model, where the shape of the input array is (3084, 32, 32).
I am getting a ValueError.
Here is my model
model.add(Dense(1028, input_shape = (3084,), activation = "sigmoid"))
model.add(Dense(514, activation="sigmoid"))
model.add(Dense(len(lb.classes_), activation="softmax"))
summary
Model: "sequential_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_57 (Dense) (None, 1028) 3171380
_________________________________________________________________
dense_58 (Dense) (None, 514) 528906
_________________________________________________________________
dense_59 (Dense) (None, 4) 2060
=================================================================
Total params: 3,702,346
Trainable params: 3,702,346
Non-trainable params: 0
_________________________________________________________________
I'm trying to fit using:
opt = SGD(lr = 0.01)
model.compile(loss = "categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
H = model.fit(train_X, train_Y, validation_data = (test_X, test_Y), epochs = 75, batch_size = 32)
You need to specify the input shape correctly; the following model should work:
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
model = Sequential()
model.add(Dense(1028, input_shape = (32,32), activation = "sigmoid"))
model.add(Flatten())
model.add(Dense(514, activation="sigmoid"))
model.add(Dense(4, activation="softmax"))
model.summary()
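With input_shape=(32, 32), the (3084, 32, 32) array is read as 3084 samples of 32x32 each, so the original compile/fit calls can stay as they were (a sketch reusing the question's variable names):
opt = SGD(lr=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
H = model.fit(train_X, train_Y, validation_data=(test_X, test_Y),
              epochs=75, batch_size=32)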

Tensorflow invalid shape (InvalidArgumentError)

model.fit produces this exception:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot update variable with shape [] using a Tensor with shape [32], shapes must be equal.
[[{{node metrics/accuracy/AssignAddVariableOp}}]]
[[loss/dense_loss/categorical_crossentropy/weighted_loss/broadcast_weights/assert_broadcastable/AssertGuard/pivot_f/_50/_63]] [Op:__inference_keras_scratch_graph_1408]
Model definition:
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(
    input_shape=(360, 7)
))
model.add(tf.keras.layers.Conv1D(32, 1, activation='relu', input_shape=(360, 7)))
model.add(tf.keras.layers.Conv1D(32, 1, activation='relu'))
model.add(tf.keras.layers.MaxPooling1D(3))
model.add(tf.keras.layers.Conv1D(512, 1, activation='relu'))
model.add(tf.keras.layers.Conv1D(1048, 1, activation='relu'))
model.add(tf.keras.layers.GlobalAveragePooling1D())
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(32, activation='softmax'))
Input Features Shape
(105, 360, 7)
Input Labels Shape
(105, 32, 1)
Compile statement
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
Model.fit statement
model.fit(features,
          labels,
          epochs=50000,
          validation_split=0.2,
          verbose=1)
Any help would be much appreciated.
You can use model.summary() to see your model architecture.
print(model.summary())
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 360, 32) 256
_________________________________________________________________
conv1d_1 (Conv1D) (None, 360, 32) 1056
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 120, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 120, 512) 16896
_________________________________________________________________
conv1d_3 (Conv1D) (None, 120, 1048) 537624
_________________________________________________________________
global_average_pooling1d (Gl (None, 1048) 0
_________________________________________________________________
dropout (Dropout) (None, 1048) 0
_________________________________________________________________
dense (Dense) (None, 32) 33568
=================================================================
Total params: 589,400
Trainable params: 589,400
Non-trainable params: 0
_________________________________________________________________
None
The shape of your output layer is (None, 32), but the shape of your labels is (105, 32, 1), so you need to change the labels to shape (105, 32). The np.squeeze() function removes single-dimensional entries from the shape of an array.
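A sketch of that fix, assuming labels is the (105, 32, 1) array from the question:
import numpy as np

labels = np.squeeze(labels, axis=-1)  # (105, 32, 1) -> (105, 32)
# each sample's 32 label values now line up with the (None, 32) output layer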
Use Flatten() before the Dense Layers.

How to fix expected dense_11 to have 4 dimensions error

I got an error while building a convolutional neural network with Keras:
Error when checking target: expected dense_11 to have 4 dimensions,
but got array with shape (48986, 12)
Since I lack knowledge, I have no idea what to fix. Can someone explain the cause and suggest a solution?
input_shape = (99, 81, 1)
nclass = 12
model = Sequential()
model.add(Dense(32, input_shape=input_shape))
model.add(Convolution2D(8,3,3,activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(Dense(128, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(nclass, activation='softmax'))
x_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.1, random_state=2017)
#vgg
batch_size = 128
nb_epoch = 1
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
#model.fit(x_train,y_train,nb_epoch= nb_epoch,batch_size = batch_size , validation_split=0.1)
model.fit(x_train, y_train, batch_size=16, validation_data=(x_valid, y_valid), epochs=3, shuffle=True, verbose=2)
model.save(os.path.join(model_path, 'vgg16.model'))
x_train has a shape of (99, 81, 1) and the nclass output should be 12.
Look at the error again:
"Error when checking target: expected dense_11 to have 4 dimensions, but got array with shape (48986, 12)" - target=labels/output
Meaning, there is some kind of problem with your output shape.
Let's print the model summary to check the expected output shape:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 99, 81, 32) 64
_________________________________________________________________
conv2d_1 (Conv2D) (None, 97, 79, 8) 2312
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 48, 39, 8) 0
_________________________________________________________________
dense_2 (Dense) (None, 48, 39, 128) 1152
_________________________________________________________________
dense_3 (Dense) (None, 48, 39, 128) 16512
_________________________________________________________________
dense_4 (Dense) (None, 48, 39, 12) 1548
=================================================================
Total params: 21,588
Trainable params: 21,588
Non-trainable params: 0
_________________________________________________________________
The final layer outputs predictions with shape: (None, 48,39,12).
You can see that this happens because the Dense layer gets input with shape (None, 48, 39, 8), and in the Keras implementation a Dense layer is applied to the last dimension. Meaning: a Dense layer with 128 nodes that gets input with shape (None, 48, 39, 8) will output (None, 48, 39, 128).
The solution depends on what you want to do and on the shape of your labels (what the output should be).
For example, if the output shape of your model should be (nclass, 1), then you can Flatten the data after the MaxPool layer, as in the sketch below.
If it should be something else, then change your labels' shape to (None, 48, 39, 12).
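A sketch of the Flatten() option, keeping the layers from the question:
model = Sequential()
model.add(Dense(32, input_shape=(99, 81, 1)))
model.add(Convolution2D(8, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())  # (None, 48, 39, 8) -> (None, 14976)
model.add(Dense(128, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(nclass, activation='softmax'))  # (None, 12), matching labels of shape (48986, 12)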
Good luck :)
