CTCBeamSearchDecoder thinks sequence_length of shape (2,) is not a vector - keras

Trying to run a beam search in a Keras model, I get confusing (and conflicting?) error messages. My model has inputs such as
inputs = Input(name='spectrograms',
               shape=(None, hparams["n_spectrogram"]))
input_length = Input(name='len_spectrograms',
                     shape=[1], dtype='int64')
and the CTC loss function requires the [1] shapes in input and label length. As far as I understand, the output should be obtained with something like
# Stick connectionist temporal classification on the end of the core model
paths = K.function(
    [inputs, input_length],
    K.ctc_decode(output, input_length, greedy=False, top_paths=4)[0])
but as-is, that leads to a complaint about the shape of input_length
ValueError: Shape must be rank 1 but is rank 2 for 'CTCBeamSearchDecoder' (op: 'CTCBeamSearchDecoder') with input shapes: [?,?,44], [?,1].
but if I chop off that dimension
K.ctc_decode(output, input_length[..., 0], greedy=False, top_paths=4)[0])
the model definition runs, but when I run y = paths([x, numpy.array([[30], [30]])]) with x.shape == (2, 30, 513) I suddenly get
tensorflow.python.framework.errors_impl.InvalidArgumentError: sequence_length is not a vector
[[{{node CTCBeamSearchDecoder}} = CTCBeamSearchDecoder[beam_width=100, merge_repeated=true, top_paths=4, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Log, ToInt32)]]
What am I doing wrong?
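Both error messages point at the same shape contract: the underlying CTCBeamSearchDecoder op wants sequence_length to be a rank-1 int32 tensor of shape (batch_size,). A minimal, self-contained sketch of that contract (toy sizes assumed, not a fix for the Keras graph above):
import numpy as np
from keras import backend as K

# Toy sizes chosen only to illustrate the shape contract, not taken from the model above
batch_size, time_steps, n_classes = 2, 30, 44
y_pred = K.softmax(K.random_uniform((batch_size, time_steps, n_classes)))

# sequence_length must be rank 1 -- shape (batch_size,), dtype int32 --
# not (batch_size, 1) and not an array like [[30], [30]]
lengths = K.constant(np.array([30, 30]), dtype='int32')

decoded, log_probs = K.ctc_decode(y_pred, lengths, greedy=False, top_paths=4)
print(K.eval(decoded[0]))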

Related

Keras LSTM expects 3 dimensions when I give it 2, but 4 dimensions when I give it 3

This gives me an error, telling me it expected 3 dimensions but got 2:
input_layer = Input(shape=(None, 1000000))
lstm_1 = LSTM(500, dropout=.2, recurrent_dropout=.2)(input_layer)
Either of these gives me an error, telling me it expected 4 dimensions but got 3:
input_layer = Input(shape=(None, 1000000, None))
input_layer = Input(shape=(None, None, 1000000))
The input shape parameter doesn't take the batch size into account, so giving shape=(None, 1000000) really means expecting input of shape (batch_size, None, 1000000), which is 3-dimensional. As a result you need to feed data of shape (samples, timesteps, features), i.e. a 3D array, to the fit function (see the sketch below).
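A runnable sketch of that rule, scaled down to 1,000 features (an assumption, purely so it fits in memory; the question's 1,000,000 features would make the LSTM weights enormous):
import numpy as np
from keras.layers import Input, LSTM
from keras.models import Model

n_features = 1000                                 # illustrative stand-in for 1000000
input_layer = Input(shape=(None, n_features))     # (timesteps, features); batch size is implicit
lstm_1 = LSTM(500, dropout=.2, recurrent_dropout=.2)(input_layer)
model = Model(input_layer, lstm_1)

x = np.zeros((8, 20, n_features), dtype=np.float32)  # 3D data: (samples, timesteps, features)
print(model.predict(x).shape)                         # (8, 500)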

Resnet with Custom Data

I am trying to modify Resnet50 with my custom data as follows:
X = [[1.85, 0.460,... -0.606] ... [0.229, 0.543,... 1.342]]
y = [2, 4, 0, ... 4, 2, 2]
X is a feature vector of length 2000 for 784 images. y is an array of size 784 containing the binary representation of labels.
Here is the code:
def __classifyRenet(self, X, y):
    image_input = Input(shape=(2000, 1))
    num_classes = 5
    model = ResNet50(weights='imagenet', include_top=False)
    model.summary()
    last_layer = model.output
    # add a global spatial average pooling layer
    x = GlobalAveragePooling2D()(last_layer)
    # add fully-connected & dropout layers
    x = Dense(512, activation='relu', name='fc-1')(x)
    x = Dropout(0.5)(x)
    x = Dense(256, activation='relu', name='fc-2')(x)
    x = Dropout(0.5)(x)
    # a softmax layer for 5 classes
    out = Dense(num_classes, activation='softmax', name='output_layer')(x)
    # this is the model we will train
    custom_resnet_model2 = Model(inputs=model.input, outputs=out)
    custom_resnet_model2.summary()
    for layer in custom_resnet_model2.layers[:-6]:
        layer.trainable = False
    custom_resnet_model2.layers[-1].trainable
    custom_resnet_model2.compile(loss='categorical_crossentropy',
                                 optimizer='adam', metrics=['accuracy'])
    clf = custom_resnet_model2.fit(X, y,
                                   batch_size=32, epochs=32, verbose=1,
                                   validation_data=(X, y))
    return clf
I am calling the function as:
clf = self.__classifyRenet(X_train, y_train)
It is giving an error:
ValueError: Error when checking input: expected input_24 to have 4 dimensions, but got array with shape (785, 2000)
Please help. Thank you!
1. First, understand the error.
Your input does not match what ResNet expects: for ResNet50 the input should be (n_samples, 224, 224, 3), but you are feeding (785, 2000). From your question, you have 784 images, each represented by an array of size 2000, which does not align with the original ResNet50 input shape of (224, 224, 3) no matter how you reshape it. That means you cannot use ResNet50 directly on your data. The only thing your code does is take the last layer of ResNet50 and add your own output layer to match your number of classes.
2. Then, what you can do.
If you insist on using the ResNet architecture, you will need to change the input layer rather than the output layer. You will also need to reshape your data so that the convolution layers can be used: it cannot stay a (2000,) array, but has to be something like (height, width, channels), just like the images ResNet and other architectures work on. Of course you will also need to change the output layer, just as you did, so that you are predicting your own classes. Try something like:
model = ResNet50(input_tensor=image_input, include_top=False, weights='imagenet')
This way, you can specify a customized input image shape. You can check the GitHub code for more information (https://github.com/keras-team/keras/blob/master/keras/applications/resnet50.py). Here's part of the docstring:
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 197.
E.g. `(200, 200, 3)` would be one valid value.
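Putting those pieces together, here is a sketch of what the answer describes, assuming the feature vectors can first be reorganized into image-like arrays; the (200, 200, 3) shape is simply the example value from the docstring, not something derived from the asker's data:
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, GlobalAveragePooling2D, Dense, Dropout
from keras.models import Model

num_classes = 5
image_input = Input(shape=(200, 200, 3))           # custom image-like input
base = ResNet50(input_tensor=image_input, include_top=False, weights='imagenet')

x = GlobalAveragePooling2D()(base.output)
x = Dense(512, activation='relu', name='fc-1')(x)
x = Dropout(0.5)(x)
out = Dense(num_classes, activation='softmax', name='output_layer')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit() then expects X of shape (n_samples, 200, 200, 3) and one-hot y of shape (n_samples, 5)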

Keras input dim error

While I am experimenting with Keras and OpenAI Gym, I keep getting this error:
ValueError: Error when checking input: expected reshape_1_input to have shape (None, 979, 1) but got array with shape (979, 1, 1)
I gather my data as follows:
def getData():
    rewardc = 0
    rewardo = 0
    labels = np.array([])
    data = np.array([])
    for i in range(11):
        print("run", i)
        for _ in range(10000):
            print("---------------------------------------------------------------------------")
            print("action", _)
            # env.render()
            action = env.action_space.sample()
            observation, reward, done, info = env.step(action)
            if done:
                env.reset()
                break
            rewardc = rewardo - reward
            rewardo = reward
            observationo = observation
            rewardco = rewardc
            ohobservation = np.array(observationo)
            ohobservation = np.append(ohobservation, rewardo)
            ohobservation = np.append(ohobservation, rewardco)
            # print("whole observation", ohobservation)
            # print("data", data)
            labelsb = np.array([action])
            if labels.size == 0:
                labels = labelsb
            else:
                labels = np.vstack((labels, action))
            if data.size == 0:
                data = ohobservation
            else:
                data = np.vstack((data, ohobservation))
    return labels, data
My x array looks like this:
[ [2] [0] [2] [3] [0] [0] .. [2] [3]]
My Y:
Y [[ 1.15792274e-02 9.40991027e-01 5.85608387e-01 ..., 0.00000000e+00
-5.27112172e-01 5.27112172e-01]
[ 1.74466133e-02 9.40591342e-01 5.95346880e-01 ..., 0.00000000e+00
-1.88372436e+00 1.35661219e+00]
[ 2.32508659e-02 9.39789397e-01 5.87415648e-01 ..., 0.00000000e+00
-4.41631844e-02 -1.83956118e+00]
Network Code:
model = Sequential()
model.add(Dense(units=64, input_dim=100))
model.add(Activation('relu'))
model.add(Dense(units=10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
model.fit(X, Y, epochs=5)
But I cannot get Keras to accept it no matter what I try.
It would be awesome if somebody could help me solve it, thank you!
Inputs
If your data is 979 examples, each example containing one element, make sure that its first dimension is 979
print(X.shape) #confirm that the shape is (979,1) or (979,)
If the shape is different from that, you will have to reshape the array, because the Dense layer expects shapes in those forms.
X = X.reshape((979,))
Now, make sure that your Dense layer is compatible with that shape:
#using input_dim:
Dense(units=64, input_dim=1) #each example has only one element
#or, using input_shape:
Dense(units=64, input_shape=(1,)) #input_shape must always be a tuple. Again, the number of examples shouldn't be a part of this shape
This will solve the problems you have with the inputs. All the error messages you get like this one are about the compatibility between your input data and the input shape you gave to the first layer:
Error when checking input: expected reshape_1_input to have shape (None, 979, 1)
but got array with shape (979, 1, 1)
The first shape in the message is the input_shape you passed to the layer. The second is the shape of your actual data.
Outputs
The same compatibility is necessary for Y, but now with the last layer.
If you put units=10 in the last layer, it means your labels must be of shape (979,10).
If your labels don't have that shape, adjust the number of cells to match it.
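A minimal end-to-end sketch of the shape matching described above, using made-up random data with the sizes from the question (979 examples, 1 feature, 10 classes):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation

X = np.random.rand(979, 1)        # (samples, features) -- matches input_dim=1
Y = np.zeros((979, 10))           # one-hot labels -- matches units=10 in the last layer
Y[np.arange(979), np.random.randint(0, 10, 979)] = 1

model = Sequential()
model.add(Dense(units=64, input_dim=1))
model.add(Activation('relu'))
model.add(Dense(units=10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X, Y, epochs=1)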

InvalidArgumentError: logits and labels must have the same first dimension seq2seq Tensorflow

I am getting this error in seq2seq.sequence_loss even though the first dimensions of logits and labels are the same, i.e. batchSize.
I have created a seq2seq model in TF 1.0. My loss function is as follows:
logits = self.decoder_logits_train
targets = self.decoder_train_targets
self.loss = seq2seq.sequence_loss(logits=logits, targets=targets, weights=self.loss_weights)
self.train_op = tf.train.AdamOptimizer().minimize(self.loss)
I am getting the following error when running my network during training:
InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [1280,150000] and labels shape [1536]
[[Node: sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](sequence_loss/Reshape, sequence_loss/Reshape_1)]]
I confirmed the shapes of the logits and targets tensors as follows:
a,b = sess.run([model.decoder_logits_train, model.decoder_train_targets], feed_dict)
print(np.shape(a)) # (128, 10, 150000) which is (BatchSize, MaxSeqSize, Vocabsize)
print(np.shape(b)) # (128, 12) which is (BatchSize, Max length of seq including padding)
So, since the first dimensions of targets and logits are the same, why am I getting this error?
Interestingly, in the error you can observe that the dimension of logits is given as (1280, 150000), which is (128 * 10, 150000), i.e. the product of the first two dimensions by vocab_size, and likewise for targets, i.e. (1536), which is (128 * 12), again the product of the first two dimensions.
Note: TensorFlow 1.0, CPU version
Maybe your padding is wrong. If you pad _EOS onto the end of the target sequence, then max_length (the real length of the target sentence) increases by 1, giving [batch, max_len+1]. Since you padded both _GO and _EOS, your target sentence length increases by 2, which makes it equal 12.
I read some other people's implementations of NMT; they only pad _EOS onto the target sentence, and _GO onto the decoder input. Tell me if I'm wrong.
I had the same error as you and I understood the problem:
The problem:
You run the decoder using these parameters:
targets are the decoder_inputs. They have length max_length because of padding. Shape: [batch_size, max_length]
sequence_length are the non-padded-lengths of all the targets of your current batch. Shape: [batch_size]
Your logits, i.e. the output of tf.contrib.seq2seq.dynamic_decode, have shape:
[batch_size, longer_sequence_in_this_batch, n_classes]
Where longer_sequence_in_this_batch is equal to tf.reduce_max(sequence_length)
So, you have a problem when computing the loss because you try to use both:
Your logits with 1st dimension shape longer_sequence_in_this_batch
Your targets with 1st dimension shape max_length
Note that longer_sequence_in_this_batch <= max_length
How to fix it:
You can simply apply some padding to your logits.
logits = self.decoder_logits_train
targets = self.decoder_train_targets
paddings = [[0, 0], [0, max_length-tf.shape(logits)[1]], [0, 0]]
padded_logits = tf.pad(logits, paddings, 'CONSTANT', constant_values=0)
self.loss = seq2seq.sequence_loss(logits=padded_logits, targets=targets,
                                  weights=self.loss_weights)
Using this method, you ensure that your logits will be padded like the targets and will have dimension [batch_size, max_length, n_classes].
For more information about the pad function, visit
Tensorflow's documentation
The error message seems to be a bit misleading, as you actually need the first and second dimensions to be the same. This is written here:
logits: A Tensor of shape [batch_size, sequence_length,
num_decoder_symbols] and dtype float. The logits correspond to the
prediction across all classes at each timestep.
targets: A Tensor of shape [batch_size, sequence_length] and dtype
int. The target represents the true class at each timestep.
This also makes sense, as logits are probability vectors, while targets represent the real output, so they need to be of the same length.
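As a toy check of that contract (TF 1.x, tf.contrib, with made-up sizes), logits and targets must agree in both batch_size and sequence_length:
import tensorflow as tf
from tensorflow.contrib import seq2seq

batch_size, max_len, vocab = 4, 12, 50                      # illustrative sizes only
logits = tf.zeros([batch_size, max_len, vocab])             # [batch_size, sequence_length, num_decoder_symbols]
targets = tf.zeros([batch_size, max_len], dtype=tf.int32)   # [batch_size, sequence_length]
weights = tf.ones([batch_size, max_len])

loss = seq2seq.sequence_loss(logits=logits, targets=targets, weights=weights)
with tf.Session() as sess:
    print(sess.run(loss))   # a scalar, log(vocab) here since the zero logits are uniform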

How to handle variable shape bias in TensorFlow?

I was just modifying an LSTM network I had written so that it prints out the test error. The issue, I realized, is that the model I had defined depends on the batch size.
Specifically, the input is a tensor of shape [batch_size, time_steps, features]. The input enters the LSTM cell, and I turn the output into a list of time_steps 2D tensors, each of shape [batch_size, hidden_units]. Each 2D tensor is then multiplied by a weight vector of shape [hidden_units] to yield a vector of shape [batch_size], to which a bias vector of shape [batch_size] is added.
In words, I give the model N sequences and I expect it to output a scalar for each time step of each sequence. That is, the output is a list of N vectors, one for each time step.
For training, I give the model batches of size 13. For the test data, I feed the entire data set, which consists of over 400 examples. Thus, an error is raised, since the bias has fixed shape batch_size.
I haven't found a way to make its shape variable without raising an error.
I can add the complete code if requested. (Added the code below anyway.)
Thanks.
def basic_lstm(inputs, number_steps, number_features, number_hidden_units, batch_size):
    weights = {
        'out': tf.Variable(tf.random_normal([number_hidden_units, 1]))
    }
    biases = {
        'out': tf.Variable(tf.constant(0.1, shape=[batch_size, 1]))
    }
    lstm_cell = rnn.BasicLSTMCell(number_hidden_units)
    init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
    hidden_layer_outputs, states = tf.nn.dynamic_rnn(lstm_cell, inputs,
                                                     initial_state=init_state, dtype=tf.float32)
    results = tf.squeeze(tf.stack([tf.matmul(output, weights['out'])
                                   + biases['out'] for output
                                   in tf.unstack(tf.transpose(hidden_layer_outputs, (1, 0, 2)))], axis=1))
    return results
You want the biases' shape not to depend on the batch size; for this model a single bias value (shape (1,)) will broadcast across whatever batch you feed. For example (using tf.zeros instead of tf.constant, but the same idea), the shape can be given as a single integer:
biases = tf.Variable(tf.zeros(1, dtype=tf.float32))
print(biases.shape)
prints:
(1,)
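Applied to the function from the question, here is a sketch (my adaptation, not the answerer's exact code) in which no variable depends on batch_size and dynamic_rnn builds its own zero state from the runtime batch:
import tensorflow as tf
from tensorflow.contrib import rnn

def basic_lstm(inputs, number_hidden_units):
    # One weight column and a single bias value per output unit; the bias
    # broadcasts over the batch, so no variable depends on the batch size.
    weights = {'out': tf.Variable(tf.random_normal([number_hidden_units, 1]))}
    biases = {'out': tf.Variable(tf.constant(0.1, shape=[1]))}

    lstm_cell = rnn.BasicLSTMCell(number_hidden_units)
    # Letting dynamic_rnn build its own zero state avoids hard-coding batch_size.
    hidden_layer_outputs, states = tf.nn.dynamic_rnn(
        lstm_cell, inputs, dtype=tf.float32)

    # One scalar per timestep per sequence, as in the original code.
    results = tf.squeeze(tf.stack(
        [tf.matmul(output, weights['out']) + biases['out']
         for output in tf.unstack(tf.transpose(hidden_layer_outputs, (1, 0, 2)))],
        axis=1))
    return results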
