I'm working with Keras on a sentence similarity task (using the STS dataset) and am having problems merging the layers. The data consists of 1184 sentence pairs, each scored between 0 and 5. Below are the shapes of my numpy arrays. I've padded each sentence to 50 words and run them through an embedding layer, using the GloVe embeddings with 100 dimensions. When merging the two networks I get an error:
Exception: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 2 arrays:
Here is what my code looks like:
total training data = 1184
X1.shape = (1184, 50)
X2.shape = (1184, 50)
Y.shape = (1184, 1)
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        # words not found in the embedding index will be all zeros
        embedding_matrix[i] = embedding_vector

embedding_layer = Embedding(len(word_index) + 1,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=50,
                            trainable=False)
s1rnn = Sequential()
s1rnn.add(embedding_layer)
s1rnn.add(LSTM(128, input_shape=(100, 1)))
s1rnn.add(Dense(1))
s2rnn = Sequential()
s2rnn.add(embedding_layer)
s2rnn.add(LSTM(128, input_shape=(100, 1)))
s2rnn.add(Dense(1))
model = Sequential()
model.add(Merge([s1rnn,s2rnn],mode='concat'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])
model.fit([X1,X2], Y,batch_size=32, nb_epoch=100, validation_split=0.05)
The problem is not with the merge layer. You need to create two separate embedding layers to feed the two different inputs.
The following modifications should work:
embedding_layer_1 = Embedding(len(word_index) + 1,
                              EMBEDDING_DIM,
                              weights=[embedding_matrix],
                              input_length=50,
                              trainable=False)

embedding_layer_2 = Embedding(len(word_index) + 1,
                              EMBEDDING_DIM,
                              weights=[embedding_matrix],
                              input_length=50,
                              trainable=False)
s1rnn = Sequential()
s1rnn.add(embedding_layer_1)
s1rnn.add(LSTM(128, input_shape=(100, 1)))
s1rnn.add(Dense(1))
s2rnn = Sequential()
s2rnn.add(embedding_layer_2)
s2rnn.add(LSTM(128, input_shape=(100, 1)))
s2rnn.add(Dense(1))
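With a separate Embedding instance in each branch, the rest of the code from the question (the Merge layer, compile, and fit calls) can stay exactly as it was.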
I have a seq-to-seq model trained on some cleverbot data:
just_phrases_X is a list of sentences and just_phrases_y is a list of responses to those sentences.
maxlen = 62

# low is a list of all the unique words.
def Convert_To_Encoding(just_phrases):
    encodings = []
    for sentence in just_phrases:
        onehotencoded = one_hot(sentence, len(low))
        encodings.append(np.array(onehotencoded))
    encodings_padded = pad_sequences(encodings, maxlen=maxlen, padding='post', value=0.0)
    return encodings_padded
encodings_X_padded = Convert_To_Encoding(just_phrases_X)
encodings_y_padded = Convert_To_Encoding(just_phrases_y)
model = Sequential()
embedding_layer = Embedding(len(low), output_dim=8, input_length=maxlen)
model.add(embedding_layer)
model.add(GRU(128)) # input_shape=(None, 496)
model.add(RepeatVector(numberofwordsoutput)) #number of characters?
model.add(GRU(128, return_sequences = True))
model.add(Flatten())
model.add(Dense(62, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer= 'adam', metrics=['accuracy'])
model.summary()
model.fit(encodings_X_padded, encodings_y_padded, batch_size = 1, epochs=1) #, validation_data = (testX, testy)
model.save("cleverbottheseq-uel.h5")
When I use this model for prediction, the output will be between 0 and 1 because of my use of softmax. However, as I have around 3000 unique words, each with a separate integer assigned to it, how do I essentially repeat what the model did during training and convert the output back to an integer which has a word assigned to it?
I don't think it is possible to create a seq2seq model with the Sequential API. Try to create the encoder and decoder separately with the Functional API. You need two inputs: the first for the encoder, the second for the decoder.
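As a rough illustration, here is a minimal encoder-decoder sketch with the Functional API (my own sketch, not the asker's architecture: the vocabulary size, latent size, and teacher-forced decoder input are assumptions; embedding dim 8 and maxlen 62 come from the question):

from keras.models import Model
from keras.layers import Input, Embedding, GRU, Dense

vocab_size = 3000   # roughly len(low) in the question
embed_dim = 8
latent_dim = 128
maxlen = 62

# Encoder: embed the input sentence and keep only the final GRU state.
encoder_inputs = Input(shape=(maxlen,))
enc_emb = Embedding(vocab_size, embed_dim)(encoder_inputs)
_, encoder_state = GRU(latent_dim, return_state=True)(enc_emb)

# Decoder: consume the (shifted) target sequence, initialised with the
# encoder state, and predict a distribution over the vocabulary per step.
decoder_inputs = Input(shape=(maxlen,))
dec_emb = Embedding(vocab_size, embed_dim)(decoder_inputs)
decoder_outputs = GRU(latent_dim, return_sequences=True)(dec_emb, initial_state=encoder_state)
decoder_outputs = Dense(vocab_size, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

At inference time you would run the decoder step by step, take the argmax of each softmax output, and map those integer indices back to words with a reversed word index, which also addresses the decoding part of the question above.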
I have multiple features to be used in an LSTM with timesteps = 5 (n). When I run the model:
def create_dataset(dataset, time_stamp=5):
    dataX, dataY = [], []
    print(len(dataset) - time_stamp - 1)
    for i in range(len(dataset) - time_stamp - 1):
        a = dataset[i:(i + time_stamp), :]
        dataX.append(a)
        dataY.append(dataset[i + time_stamp, 0])
    return np.array(dataX), np.array(dataY)
model = Sequential()
model.add(LSTM(16,activation="relu",input_shape=(timeFrame,2), return_sequences=True))
model.add(Dense(1,activation='relu'))
model.compile(optimizer="adam",loss='mean_squared_error')
model.summary()
model.fit(X_train, Y_train, epochs=30, batch_size=1, verbose=1)
X_train.shape = (8, 5, 2)   # 8 samples, each with 5 timesteps and 2 features
Y_train.shape = (8,)
train_predict = model.predict(X_train)
train_predict.shape = (8, 5, 1)
So far this works fine; the problem is when I run the following:

scaler.inverse_transform(train_predict)  # ValueError: Found array with dim 3. Estimator expected <= 2.
scaler.inverse_transform(Y_train)        # also raises an error

The issue is that the two arrays have different dimensions, because two input features are being mapped to one output feature, so inverse_transform does not work properly. How can I fix this? The goal is to find the RMSE:

math.sqrt(mean_squared_error(Y_train, train_predict))
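One common workaround, as a minimal sketch (my assumption, not from the original post): fit a dedicated scaler on the single target column, and reduce the (8, 5, 1) prediction to one value per sample so both arrays are 2-D with one feature before inverting:

import math
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

y_raw = np.random.rand(8, 1)                # stand-in for the unscaled target
target_scaler = MinMaxScaler().fit(y_raw)   # scaler that only knows 1 feature
y_scaled = target_scaler.transform(y_raw)   # what the model would train on

# Stand-in for model.predict(X_train): shape (8, 5, 1) as in the question;
# keep only the last timestep so there is one value per sample.
train_predict = np.random.rand(8, 5, 1)[:, -1, :]         # -> (8, 1)

y_true = target_scaler.inverse_transform(y_scaled)        # (8, 1)
y_pred = target_scaler.inverse_transform(train_predict)   # (8, 1)
print(math.sqrt(mean_squared_error(y_true, y_pred)))      # the RMSE

Dropping return_sequences=True from the LSTM would also make the model output one value per sample directly.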
I'm using Keras and TensorFlow to train a model that predicts a matching font based on an image of some letters. My dataset folder contains a separate folder for each letter, with images of that letter in varying forms. My code for training the model looks like this:
LETTER_IMAGES_FOLDER = "datasets"
MODEL_FILENAME = "fonts_model.hdf5"
MODEL_LABELS_FILENAME = "model_labels.dat"
data = pd.read_csv('annotations.csv')
paths = list(data['Path'].values)
Y = list(data['Font'].values)
encoder = LabelEncoder()
encoder.fit(Y)
Y = encoder.transform(Y)
Y = np_utils.to_categorical(Y)
data = []

# loop over the input images
for image_file in paths:
    # load the image and convert it to grayscale
    image = cv2.imread(image_file)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # add a third channel dimension to the image to make Keras happy
    image = np.expand_dims(image, axis=2)
    # add the letter image and its label to our training data
    data.append(image)

data = np.array(data, dtype="float") / 255.0
train_x, test_x, train_y, test_y = model_selection.train_test_split(data,Y,test_size = 0.1, random_state = 0)
# Save the mapping from labels to one-hot encodings.
# We'll need this later when we use the model to decode what its predictions mean.
with open(MODEL_LABELS_FILENAME, "wb") as f:
    pickle.dump(encoder, f)
# Build the neural network!
model = Sequential()
# First convolutional layer with max pooling
model.add(Conv2D(20, (5, 5), padding="same", input_shape=(100, 100, 1), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Second convolutional layer with max pooling
model.add(Conv2D(50, (5, 5), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(500, activation="relu"))
print (len(encoder.classes_))
model.add(Dense(len(encoder.classes_), activation="softmax"))
# Ask Keras to build the TensorFlow model behind the scenes
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the neural network
model.fit(train_x, train_y, validation_data=(test_x, test_y), batch_size=32, epochs=2, verbose=1)
# Save the trained model to disk
model.save(MODEL_FILENAME)
Once the model has been created I'm predicting with it as follows:
predictions = model.predict(letter_image)
print (predictions) # this has the length of 1
The problem is that "predictions" is always an array of size 1 and I'm not sure why. I'm using softmax, categorical_crossentropy and my Dense value is greater than 1 in the last layer. Could someone please tell me why I'm not getting the top n predictions here?
I've also tried sigmoid with binary_crossentropy but get the same result. I think there's something more to it that I'm missing.
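A likely explanation, as a minimal sketch (my reading, not a confirmed answer from the thread): model.predict returns one row per input sample, so for a single image the result has shape (1, n_classes), and len(predictions) is 1. The per-class probabilities are in predictions[0]:

import numpy as np

# predictions has shape (1, len(encoder.classes_)) for one input image;
# row 0 holds the probability for every font class.
probs = predictions[0]

# indices of the top-n classes, highest probability first
top_n = 5
top_indices = np.argsort(probs)[::-1][:top_n]
for i in top_indices:
    print(encoder.classes_[i], probs[i])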
I'm trying to build a sequential model using Keras with an LSTM layer as the first layer. train_x has a shape of (21000, 2) and I'm using a batch size of 10.
When I try
model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=train_x.shape[1:],
               return_sequences=True))
I get an error saying
Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2
Then I tried to change input_shape and set it to input_shape=(train_x.shape) and I got another error saying
Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (21000, 2)
What am I doing wrong?
The Keras LSTM layer expects its input to be 3-dimensional, as (batch_size, seq_length, input_dims), but you have assigned it incorrectly. Try this:
input_dims = train_x.shape[1]
seq_length = 10  # decide an integer; 10 is what the toy example below uses

model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=(seq_length, input_dims),
               return_sequences=True))
You also need to reshape your data to three dimensions, where the new dimension represents the sequence. I used a toy dataset to show an example: here data and labels have shapes (150, 4) and (150,) initially, and the following script reshapes them:
import numpy as np

seq_length = 10
dataX = []
dataY = []
for i in range(0, 150 - seq_length, 1):
    dataX.append(data[i:i+seq_length])
    dataY.append(labels[i+seq_length-1])

dataX = np.reshape(dataX, (-1, seq_length, 4))
dataY = np.reshape(dataY, (-1, 1))
# dataX.shape, dataY.shape
Output: ((140, 10, 4), (140, 1))
Now you can safely feed it to the model.
Note: I prepared the dataset for a many-to-one model, but you can adapt it to your setting.
I have a CRNN model for text recognition; it was published on GitHub and trained on English. Now I'm doing the same thing with this algorithm, but for Arabic.
My ctc function is:
def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    # the 2 is critical here since the first couple outputs of the RNN
    # tend to be garbage:
    y_pred = y_pred[:, 2:, :]
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
My Model is:
def get_Model(training):
    img_w = 128
    img_h = 64

    # Network parameters
    conv_filters = 16
    kernel_size = (3, 3)
    pool_size = 2
    time_dense_size = 32
    rnn_size = 128

    if K.image_data_format() == 'channels_first':
        input_shape = (1, img_w, img_h)
    else:
        input_shape = (img_w, img_h, 1)

    # Initialising the CNN
    act = 'relu'
    input_data = Input(name='the_input', shape=input_shape, dtype='float32')
    inner = Conv2D(conv_filters, kernel_size, padding='same',
                   activation=act, kernel_initializer='he_normal',
                   name='conv1')(input_data)
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')(inner)
    inner = Conv2D(conv_filters, kernel_size, padding='same',
                   activation=act, kernel_initializer='he_normal',
                   name='conv2')(inner)
    inner = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')(inner)

    conv_to_rnn_dims = (img_w // (pool_size ** 2), (img_h // (pool_size ** 2)) * conv_filters)
    inner = Reshape(target_shape=conv_to_rnn_dims, name='reshape')(inner)

    # cuts down input size going into RNN:
    inner = Dense(time_dense_size, activation=act, name='dense1')(inner)

    # Two layers of bidirectional GRUs
    # GRU seems to work as well, if not better than LSTM:
    gru_1 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru1')(inner)
    gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru1_b')(inner)
    gru1_merged = add([gru_1, gru_1b])
    gru_2 = GRU(rnn_size, return_sequences=True, kernel_initializer='he_normal', name='gru2')(gru1_merged)
    gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, kernel_initializer='he_normal', name='gru2_b')(gru1_merged)

    # transforms RNN output to character activations:
    inner = Dense(num_classes + 1, kernel_initializer='he_normal',
                  name='dense2')(concatenate([gru_2, gru_2b]))
    y_pred = Activation('softmax', name='softmax')(inner)
    Model(inputs=input_data, outputs=y_pred).summary()

    labels = Input(name='the_labels', shape=[30], dtype='float32')
    input_length = Input(name='input_length', shape=[1], dtype='int64')
    label_length = Input(name='label_length', shape=[1], dtype='int64')

    # Keras doesn't currently support loss funcs with extra parameters,
    # so CTC loss is implemented in a Lambda layer
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length])

    # clipnorm seems to speed up convergence
    # the loss calc occurs elsewhere, so use a dummy lambda func for the loss
    if training:
        return Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)
    return Model(inputs=[input_data], outputs=y_pred)
Then I compile it with the SGD optimizer (I tried both SGD and Adam):
sgd = SGD(lr=0.0000002, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd)
Then I fit the model on my training set (images of words up to 30 characters long, each paired with a label sequence of length 30):
model.fit_generator(generator=tiger_train.next_batch(),
                    steps_per_epoch=int(tiger_train.n / batch_size),
                    epochs=30,
                    callbacks=[checkpoint],
                    validation_data=tiger_val.next_batch(),
                    validation_steps=int(tiger_val.n / val_batch_size))
Once training starts, it gives me loss = inf. After a lot of searching, I couldn't find any similar problem. So my question is: how can I solve this? What can make a CTC loss compute an infinite cost?
Thanks in advance
I found the problem: it was a dimensions problem.
For CRNN OCR using a CTC layer, if you are detecting a sequence of length n, your image should yield at least (2*n - 1) timesteps in width; the wider the better, up to the best image/timesteps ratio that lets the CTC layer recognize each letter correctly. If the width yields fewer than (2*n - 1) timesteps, CTC gives a nan loss.
This error also happens when the image text has two identical consecutive characters, e.g. the 'pp' in 'happen'; as a workaround, you can remove data that has this characteristic.
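To make the width rule concrete, here is a small illustration of my own (not from the answer): CTC needs at least one output timestep per label, plus one blank between every pair of identical consecutive labels, which in the worst case (all characters identical) comes to 2*n - 1 timesteps.

def min_timesteps(label):
    # One timestep per character, plus one mandatory blank between each
    # pair of identical neighbours (CTC would collapse the repeat otherwise).
    repeats = sum(1 for a, b in zip(label, label[1:]) if a == b)
    return len(label) + repeats  # between n and 2*n - 1

print(min_timesteps("happen"))  # the 'pp' forces one extra blank -> 7
print(min_timesteps("aaaa"))    # worst case: 2*4 - 1 = 7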