Unable to understand the format of test data while evaluating a trained model - python-3.x

I am training a regression model that approximates the weights for the equation:
Y = R + B + G
For this, I provide pre-determined values of R, B, G and Y as training data.
R = np.array([-4, -10, -2, 8, 5, 22, 3], dtype=float)
B = np.array([4, -10, 0, 0, 15, 5, 1], dtype=float)
G = np.array([0, 10, 5, 8, 1, 2, 38], dtype=float)
Y = np.array([0, -10, 3, 16, 21, 29, 42], dtype=float)
Each training sample is a 1x3 array holding the i-th values of R, B and G.
RBG = np.array([R,B,G]).transpose()
print(RBG)
[[ -4.   4.   0.]
 [-10. -10.  10.]
 [ -2.   0.   5.]
 [  8.   0.   8.]
 [  5.  15.   1.]
 [ 22.   5.   2.]
 [  3.   1.  38.]]
I used a neural network with 3 inputs, one dense hidden layer with 2 neurons, and an output layer with a single neuron.
hidden = tf.keras.layers.Dense(units=2, input_shape=[3])
output = tf.keras.layers.Dense(units=1)
Then I trained the model:
model = tf.keras.Sequential([hidden, output])
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
history = model.fit(RBG, Y, epochs=500, verbose=False)
print("Finished training the model")
The loss-vs-epoch plot looked normal, decreasing and then flattening out.
But when I tested the model with arbitrary values of R, B and G, e.g.
print(model.predict([[1],[1],[1]]))
expecting the output to be 1+1+1 = 3, I got this ValueError:
ValueError: Error when checking input: expected dense_2_input to have shape (3,) but got array with shape (1,)
Any idea where I might be going wrong?
Surprisingly, the only input it accepts is the training data itself, i.e.
print(model.predict(RBG))
[[ 2.1606684e-07]
[-3.0000000e+01]
[-3.2782555e-07]
[ 2.4000002e+01]
[ 4.4999996e+01]
[ 2.9000000e+01]
[ 4.2000000e+01]]

As the error says, the problem is the shape of your input. You need to transpose [[1],[1],[1]] so that it has the (1, 3) shape the model expects:
npq = np.array([[1],[1],[1]]).transpose()
and now feed this to model.predict(npq).
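Equivalently, here is a minimal sketch with a row-vector input (the values are just for illustration):
import numpy as np
# one sample with R=1, B=1, G=1 => shape (1, 3), matching input_shape=[3]
sample = np.array([[1.0, 1.0, 1.0]])
print(model.predict(sample))  # should come out close to 1 + 1 + 1 = 3 after training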

Related

TorchMetrics MultiClass accuracy for semantic segmentation

Let's use the following example for a semantic segmentation problem using TorchMetrics, where we predict tensors of shape (batch_size, classes, height, width):
# shape: (1, 3, 2, 2) => (batch_size, classes, height, width)
mask_multiclass_pred = torch.tensor(
    [[
        [
            # predictions for first class per pixel
            [0.85, 0.4],
            [0.4, 0.3],
        ],
        [
            # predictions for second class per pixel
            [0, 0.8],
            [0, 1],
        ],
        [
            # predictions for third class per pixel
            [0.8, 0.6],
            [0.7, 0.3],
        ]
    ]],
    dtype=torch.float32
)
Obviously, if we reduce this to the actual predicted classes as an index tensor:
reduced_pred = torch.argmax(mask_multiclass_pred, dim=1)
reduced_pred = torch.where(torch.amax(mask_multiclass_pred, dim=1) >= 0.5, reduced_pred, -1)
We get:
# shape: (1, 2, 2) => (batch_size, height, width)
tensor([[[0, 1],
         [2, 1]]])
...for the predictions.
Let's suppose the following is our ground truth for the labels, in shape (batch_size, height, width). The MulticlassAccuracy documentation suggests the targets should be (N, ...), i.e. batch_size plus extra dimensions, which in semantic segmentation are height and width:
# shape: (1, 2, 2) => (batch_size, height, width)
# as suggested by TorchMetrics, targets should be (N, ...) where ... is the extra dimensions, in this case 2D => class per pixel
mask_multiclass_gt = torch.tensor(
    [
        [
            # class 0, 1, or 2 per pixel => (2, 2) shape for mask
            [0, 1],
            [0, 2],
        ],
    ],
    dtype=torch.int
)
Now, if we calculate the MulticlassAccuracy:
seg_acc_cls = MulticlassAccuracy(num_classes=3, top_k=1, average="none", multidim_average="global")
seg_acc_cls(mask_multiclass_pred, mask_multiclass_gt)
We get the following result:
# shape (3,) => one accuracy per class (3 classes)
tensor([0.5000, 1.0000, 0.0000])
Why is this the output?
For example, shouldn't the first class be 0.75 instead of 0.5? Because for the default threshold of 0.5 our reduced predictions for the first class would be:
[0, 1] => [True, False]
[2, 1] => [False, False]
And obviously then we have 1 TP, 2 TN, and 1 FN. So we should have (1+2)/4?!
Likewise, the second class would be:
[0, 1] => [False, True]
[2, 1] => [False, True]
So again, we have 1 TP, but also 1 FP (lower right), and then 2 TN, which again should be (1 TP + 2TN)/4 = 0.75 and not 1.0.
For the 3rd class we would get these reduced predictions:
[0, 1] => [False, False]
[2, 1] => [True, False]
Which would be 0 TP, 1 FP (lower left, the only True), 1 FN (lower right), and 2 TN, so the score should be 2/4 => 0.5.
Seems like you're having mostly a definitional issue here. Multiclass classification accuracy (at least as defined in this package) is simply the class recall for each class, i.e. TP/(TP+FN). True negatives are not taken into account in the scoring, or else sparse classes would have their accuracy dominated almost entirely by true negatives and would be fairly insensitive to the actual performance (TP and FN). For this metric, false positives do not directly impact accuracy (although, since this is a multiclass and not a multilabel problem, each pixel can have only one class, so a FP for one class indirectly causes a FN for another class, meaning FPs are still reflected in the score).
Personally I find these multi-class / multi-label classification tasks, especially in segmentation, complex enough and the metric definitions variable enough that I generally just re-implement them myself so I know exactly what I'm calculating.
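To make the recall definition concrete, here is a small sketch that recomputes the per-class values by hand for the example above (using a plain argmax over the class dimension, which matches the metric's output here):
import torch

pred = torch.argmax(mask_multiclass_pred, dim=1)  # tensor([[[0, 1], [2, 1]]])
gt = mask_multiclass_gt                           # tensor([[[0, 1], [0, 2]]])
for c in range(3):
    tp = ((pred == c) & (gt == c)).sum()
    fn = ((pred != c) & (gt == c)).sum()
    print(c, (tp / (tp + fn)).item())  # per-class recall: 0.5, 1.0, 0.0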

Ignore padding class (0) during multi class classification

I have a problem where, given a set of tokens, I predict another token. For this task I use an embedding layer with Vocab-size + 1 as the input size. The +1 is because the sequences are padded with zeros. E.g., given a Vocab-size of 10 000 and max_sequence_len=6, x_train looks like:
array([[    0,     0,     0,    11,    22,     4],
       [   29,     6,    12,    29,  1576,    29],
       ...,
       [    0,     0,    67,  8947,  7274,  7019],
       [    0,     0,     0,    15, 10000,    50]])
y_train consists of integers between 1 and 10000; in other words, this is a multi-class classification problem with 10000 classes.
My problem: When I specify the output size in the output layer, I would like to specify 10000, but the model will predict the classes 0-9999 if I do this. Another approach is to set output size to 10001, but then the model can predict the 0-class (padding), which is unwanted.
Since y_train is mapped from 1 to 10000, I could remap it to 0-9999, but since the labels share the same mapping as the input tokens, this seems like an unnecessary workaround.
EDIT:
I realize, as @Andrey pointed out in the comments, that I could allow for 10001 classes and simply add the padding token to the vocabulary, although I am never interested in the network predicting 0's.
How can I tell the model to predict on the labels 1-10000, while at the same time having 10000 classes, not 10001?
I would use the following approach:
import tensorflow as tf
inputs = tf.keras.layers.Input(shape=())
x = tf.keras.layers.Embedding(10001, 512)(inputs) # input shape of full vocab size [10001]
x = tf.keras.layers.Dense(10000, activation='softmax')(x) # training weights based on reduced vocab size [10000]
z = tf.zeros(tf.shape(x)[:-1])[..., tf.newaxis]
x = tf.concat([z, x], axis=-1) # add constant zero on the first position (to avoid predicting 0)
model = tf.keras.Model(inputs=inputs, outputs=x)
inputs = tf.random.uniform([10, 10], 0, 10001, dtype=tf.int32)
labels = tf.random.uniform([10, 10], 0, 10001, dtype=tf.int32)
model.compile(loss='sparse_categorical_crossentropy')
model.fit(inputs, labels)
pred = model.predict(inputs) # all zero positions filled by 0 (which is minimum value)
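If this layout works for you, the predicted labels can be read off from pred with an argmax over the last axis; since position 0 is pinned to a constant zero while the softmax outputs are strictly positive, the argmax can never be 0, so the result already lands in the 1-10000 range (a small sketch building on the code above):
pred_labels = tf.argmax(pred, axis=-1).numpy()  # values in 1..10000, same mapping as the input tokens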

CNN Autoencoder with Embedding(300D GloveVec) layer for 10-15 word sentence not working problem due to padding

I am using pretrained GloVe vectors from Stanford to get a meaningful representation of each word, but I want representations for sentences of 5-15 words so that I can use cosine similarity to match against a new sentence. I fix each sentence at 15 words and apply an embedding layer, so the input shape becomes 15 x 300 dimensions (if a sentence has fewer than 15 words, it is padded up to 15 with a single 300-D vector drawn from a random uniform distribution).
Below are my network shapes
[None, 15] -- Raw inputs embedding and padded(1) ID's
[None, 15, 300, 1], --input
[None, 8, 150, 128], -- conv 1
[None, 4, 75, 64], -- conv 2
[None, 2, 38, 32], -- conv 3
[None, 1, 19, 16], -- conv 4
[None, 1, 10, 4] -- conv 5
[None, 50] ---------Latent shape (new meaningful representation)------
[None, 1, 10, 4] -- encoded input for de-conv
[None, 1, 19, 16], -- conv_trans 5
[None, 2, 38, 32], -- conv_trans 4
[None, 4, 75, 64], -- conv_trans 3
[None, 8, 150, 128], -- conv_trans 2
[None, 15, 300, 1] -- conv_trans 1 -- for loss function with input
I have built the CNN model with an embedding layer in TensorFlow:
self._inputs = tf.placeholder(dtype=tf.int64, shape=[None, self.sent_len], name='input_x')  # (?, 15)
losses = []

# lookup layer
with tf.variable_scope('embedding') as scope:
    self._W_emb = _variable_on_cpu(name='embedding', shape=[self.vocab_size, self.emb_size],
                                   initializer=tf.random_uniform_initializer(minval=-1.0, maxval=1.0))
    # assigned pretrained embedding here, so the initializer would be overridden
    sent_batch = tf.nn.embedding_lookup(params=self._W_emb, ids=self._inputs)
    sent_batch = tf.expand_dims(sent_batch, -1)
    self._x = sent_batch

encoder = []
shapes = []
current_input = sent_batch
shapes.append(current_input.get_shape().as_list())

for layer_i, n_output in enumerate(n_filters[1:]):
    with tf.variable_scope('Encode_conv-%d' % layer_i) as scope:
        n_input = current_input.get_shape().as_list()[3]
        W, wd = _variable_with_weight_decay('W-%d' % layer_i, shape=[filter_size, filter_size, n_input, n_output],
                                            initializer=tf.random_uniform_initializer(minval=-1.0, maxval=1.0), wd=self.l2_reg)
        losses.append(wd)
        biases = _variable_on_cpu('bias-%d' % layer_i, shape=[n_output], initializer=tf.constant_initializer(0.00))
        encoder.append(W)
        output = tf.nn.relu(tf.add(tf.nn.conv2d(current_input, W, strides=[1, 2, 2, 1], padding='SAME'), biases), name=scope.name)
        current_input = output
        shapes.append(output.get_shape().as_list())

# z = current_input
original_shape = current_input.get_shape().as_list()
flatsize = original_shape[1] * original_shape[2] * original_shape[3]
height, width, channel = original_shape[1] * 1, original_shape[2] * 1, original_shape[3] * 1
current_input = tf.reshape(current_input, [-1, flatsize])

with tf.variable_scope('Encode_Z-%d' % layer_i) as scope:
    W_en, wd_en = _variable_with_weight_decay('W', shape=[current_input.get_shape().as_list()[1], outsize],
                                              initializer=tf.truncated_normal_initializer(stddev=0.05),
                                              wd=self.l2_reg)
    losses.append(wd_en)
    biases_en = _variable_on_cpu('bias', shape=[outsize], initializer=tf.constant_initializer(0.00))
    self._z = tf.nn.relu(tf.nn.bias_add(tf.matmul(current_input, W_en), biases_en))  # compressed representation (?, 50)

with tf.variable_scope('Decode_Z-%d' % layer_i) as scope:
    W_dc, wd_dc = _variable_with_weight_decay('W', shape=[self._z.get_shape().as_list()[1], current_input.get_shape().as_list()[1]],
                                              initializer=tf.truncated_normal_initializer(stddev=0.05), wd=self.l2_reg)
    losses.append(wd_dc)
    biases_dc = _variable_on_cpu('bias', shape=[current_input.get_shape().as_list()[1]], initializer=tf.constant_initializer(0.00))
    current_input = tf.nn.relu(tf.nn.bias_add(tf.matmul(self._z, W_dc), biases_dc))
    current_input = tf.reshape(current_input, [-1, height, width, channel])

encoder.reverse()
shapes.reverse()

for layer_i, shape in enumerate(shapes[1:]):
    with tf.variable_scope('Decode_conv-%d' % layer_i) as scope:
        W = encoder[layer_i]
        b = _variable_on_cpu('bias-%d' % layer_i, shape=[W.get_shape().as_list()[2]], initializer=tf.constant_initializer(0.00))
        hh, ww, cc = shape[1], shape[2], shape[3]
        output = tf.nn.relu(tf.add(tf.nn.conv2d_transpose(current_input, W, [tf.shape(sent_batch)[0], hh, ww, cc],
                                                          strides=[1, 2, 2, 1], padding='SAME'), b), name=scope.name)
        current_input = output

self._y = current_input

# loss
with tf.variable_scope('loss') as scope:
    cross_entropy_loss = tf.reduce_mean(tf.square(current_input - sent_batch))
    losses.append(cross_entropy_loss)
    self._total_loss = tf.add_n(losses, name='total_loss')

opt = tf.train.AdamOptimizer(0.0001)
grads = opt.compute_gradients(self._total_loss)
self._train_op = opt.apply_gradients(grads)
But the results are not good: the cosine similarity between the two sentences below is 0.9895 after taking the latent compressed representation from the above model.
Functional disorders of polymorphonuclear neutrophils
Unspecified fracture of skull, sequela
And if I take sentences with 2-5 words, the similarity goes up to 0.9999 (I suspect the issue is caused by the larger number of padded positions, all filled with the same uniform-random vector from the embedding lookup).
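That suspicion is easy to reproduce outside the model: if two padded 15 x 300 sentence matrices share the same padding vector in most of their rows, their cosine similarity is dominated by the padding even when the actual word vectors are unrelated. A minimal sketch with made-up vectors (not your trained embeddings):
import numpy as np

rng = np.random.default_rng(0)
pad = rng.uniform(-1, 1, 300)              # one shared padding vector, as in the embedding above
sent_a = rng.uniform(-1, 1, (5, 300))      # 5 "real" word vectors for sentence A
sent_b = rng.uniform(-1, 1, (5, 300))      # 5 unrelated word vectors for sentence B

def flatten_padded(words):
    # pad to 15 rows with the shared padding vector, then flatten to one long vector
    return np.concatenate([np.tile(pad, (10, 1)), words], axis=0).ravel()

a, b = flatten_padded(sent_a), flatten_padded(sent_b)
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos)  # roughly 2/3, because two thirds of each flattened vector is the identical padding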
The following information may be helpful:
Total of 10,000 training samples, trained for 10 epochs
ReLU activations
MSE loss function
Adam optimizer
Below is the distribution of words over all sentences: [figure omitted]
Finally, can anyone suggest what's going wrong? Or is the approach itself not a good way to proceed?

Shape must be rank 1 but is rank 2 tflearn error

I am using a DNN provided by tflearn to learn from some data. My data variable has a shape of (6605, 32) and my labels have a shape of (6605,), which I reshape to (6605, 1) in the code below...
# Target label used for training
labels = np.array(data[label], dtype=np.float32)
# Reshape target label from (6605,) to (6605, 1)
labels = tf.reshape(labels, shape=[-1, 1])
# Data for training minus the target label.
data = np.array(data.drop(label, axis=1), dtype=np.float32)
# DNN
net = tflearn.input_data(shape=[None, 32])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 1, activation='softmax')
net = tflearn.regression(net)
# Define model.
model = tflearn.DNN(net)
model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)
This gives me a couple of errors, the first is...
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 1 but is rank 2 for 'strided_slice' (op: 'StridedSlice') with input shapes: [6605,1], [1,16], [1,16], [1].
...and the second is...
During handling of the above exception, another exception occurred:
ValueError: Shape must be rank 1 but is rank 2 for 'strided_slice' (op: 'StridedSlice') with input shapes: [6605,1], [1,16], [1,16], [1].
I have no idea what rank 1 and rank 2 are, so I don't know how to fix this issue.
In TensorFlow, rank is the number of dimensions of a tensor (not the same as matrix rank). As an example, the following tensor has a rank of 2.
t1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(t1.shape) # prints (3, 3)
Likewise, the following tensor has a rank of 3.
t2 = np.array([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
print(t2.shape) # prints (2, 2, 3)
tflearn is built on top of TensorFlow, but the data you feed into it should be NumPy arrays, not TensorFlow tensors. I have modified your code as follows and commented where necessary.
# Target label used for training
labels = np.array(data[label], dtype=np.float32)
# Reshape target label from (6605,) to (6605, 1)
labels = np.reshape(labels, (-1, 1))  # make sure the labels have the shape (?, 1)
# Data for training minus the target label.
data = np.array(data.drop(label, axis=1), dtype=np.float32)
data = np.reshape(data, (-1, 32))  # make sure the data has the shape (?, 32)
# DNN
net = tflearn.input_data(shape=[None, 32])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 1, activation='softmax')
net = tflearn.regression(net)
# Define model.
model = tflearn.DNN(net)
model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)
Hope this helps.

Scikit-learn R2 always zero

I'm trying to test my Scikit-learn machine learning algorithm with a simple R^2 score, but for some reason it always returns zero.
import numpy
from sklearn.metrics import r2_score
prediction = numpy.array([0.1567, 4.7528, 1.1260, 0.2294]).reshape(1, -1)
training = numpy.array([0, 3, 1, 0]).reshape(1, -1)
r2 = r2_score(training, prediction, multioutput="raw_values")
print r2
[ 0. 0. 0. 0.]
This is a single four-part value, not four separate values. How do I get proper R^2 scores?
If you are trying to calculate the R² value between two vectors, you should just pass two one-dimensional arrays. See the documentation.
In the example you provided, the first item of prediction is compared to the first item of training, but note that you only have one row in each, so it calculates R² for 0.1567 against 0 alone (which comes out as 0), then for 4.7528 against 3 (also 0), and so on... It sounds like you want the R² for the two vectors, like the following:
prediction = numpy.array([0.1567, 4.7528, 1.1260, 0.2294])
training = numpy.array([0, 3, 1, 0])
print(r2_score(training, prediction))
0.472439485
If you have multi-dimensional arrays you can use the multioutput flag to determine what the output should look like:
#modified from the scikit-learn example
y_true = [[0.5, 1], [-1, 1], [7, -6]]
y_pred = [[0, 2], [-1, 2], [8, -5]]
print(r2_score(y_true, y_pred, multioutput='raw_values'))
array([ 0.96543779, 0.90816327])
Here the first item of each list in y_true is compared to the first item of the corresponding list in y_pred, the second item to the second, and so on.
