Keras backend: argmax if above threshold, else -1 - keras

I would like to create a custom accuracy function that uses argmax for y_pred only if the value at argmax exceeds a threshold, else -1.
In terms of the Keras backend, it would be a modification of sparse_categorical_accuracy:
return backend.cast(
    backend.equal(
        backend.flatten(y_true),
        backend.cast(backend.argmax(y_pred, axis=-1),
                     backend.floatx())),
    backend.floatx())
So, instead of:
backend.argmax(y_pred, axis=-1)
I need a function with the pseudocode logic:
argmax_values = backend.argmax(y_pred, axis=-1)
argmax_values if y_pred[argmax_values] > threshold else -1
As a concrete example, if:
x = [[0.75, 0.25], [0.85, 0.15], [0.5, 0.5], [0.95, 0.05]]
and threshold=0.8, then the result of the desired function would be:
[-1, 0, -1, 0]
How can I achieve this using the Keras backend? My Keras version is 2.2.4, so I do not have access to the TensorFlow 2 backend.

You can use K.switch to select values from one of two tensors based on a condition. Using K.switch, your desired function would be:
from keras import backend as K
def argmax_w_threshold(y_pred, threshold=0.8):
    argmax_values = K.cast(K.argmax(y_pred, axis=-1), K.floatx())
    return K.switch(
        K.max(y_pred, axis=-1) > threshold,
        argmax_values,
        -1. * K.ones_like(argmax_values)
    )
Note that both tensors in the then and else branches of K.switch must have the same shape, hence the use of K.ones_like.
On your example:
>>> import tensorflow as tf
>>> sess = tf.InteractiveSession()
>>> x = [[0.75, 0.25], [0.85, 0.15], [0.5, 0.5], [0.95, 0.05]]
>>> sess.run(argmax_w_threshold(x))
array([-1., 0., -1., 0.], dtype=float32)
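If you want to plug this back into the modified sparse_categorical_accuracy from the question, a minimal sketch could look like the following (the metric name is made up for illustration; it assumes argmax_w_threshold from above, which already returns floatx values):
def sparse_categorical_accuracy_w_threshold(y_true, y_pred, threshold=0.8):
    # -1 entries (below-threshold predictions) never equal a valid label,
    # so they are simply counted as incorrect
    return K.cast(
        K.equal(
            K.flatten(y_true),
            argmax_w_threshold(y_pred, threshold=threshold)),
        K.floatx())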

Related

Output of the model depends on the shape of the weights tensor

I want to train the model to sum the three inputs. So it is as simple as possible.
First, the weights are initialized randomly. This produces a bad error (approx. 0.5).
Then I initialize the weights with zeros. There are two options:
the shape of the weights tensor is [1, 3]
the shape of the weights tensor is [3]
When I choose the 1st option, the model still performs badly and can't learn this simple formula.
When I choose the 2nd option, it works perfectly with an error of 10e-12.
Why does the result depend on the shape of the weights? Why do I need to initialize the model with zeros to solve this simple problem?
import torch
from torch.nn import Sequential as Seq, Linear as Lin
from torch.optim.lr_scheduler import ReduceLROnPlateau
X = torch.rand((1024, 3))
y = (X[:,0] + X[:,1] + X[:,2])
m = Seq(Lin(3, 1, bias=False))
# 1 option
m[0].weight = torch.nn.parameter.Parameter(torch.tensor([[0, 0, 0]], dtype=torch.float))
# 2 option
#m[0].weight = torch.nn.parameter.Parameter(torch.tensor([0, 0, 0], dtype=torch.float))
optim = torch.optim.SGD(m.parameters(), lr=10e-2)
scheduler = ReduceLROnPlateau(optim, 'min', factor=0.5, patience=20, verbose=True)
mse = torch.nn.MSELoss()
for epoch in range(500):
    optim.zero_grad()
    out = m(X)
    loss = mse(out, y)
    loss.backward()
    optim.step()
    if epoch % 20 == 0:
        print(loss.item())
    scheduler.step(loss)
The first option doesn't learn because it fails due to broadcasting: while out.shape == (1024, 1), the corresponding target y has shape (1024,). MSELoss, as expected, computes the mean of the tensor (out - y)^2, which in this case has shape (1024, 1024), clearly the wrong objective for this task. With the 2nd option, the tensor (out - y)^2 has shape (1024,) and its mean corresponds to the actual MSE. The default approach, without explicitly changing the weight shape (through option 1 or 2), would work if you set the target shape to (1024, 1), for example with y = y.unsqueeze(-1) after the definition of y.
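A quick sketch of the broadcasting pitfall described above (shapes only, no training loop):
import torch

out = torch.zeros(1024, 1)   # model output shape with a weight of shape [1, 3]
y = torch.zeros(1024)        # target shape as defined in the question

print((out - y).shape)                 # torch.Size([1024, 1024]) due to broadcasting
print((out - y.unsqueeze(-1)).shape)   # torch.Size([1024, 1]), the intended elementwise difference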

Sklearn.metrics.mean_squared_error() returns negative number

I want to understand why sklearn.metrics.mean_squared_error() is returning a negative number.
I know this should not be possible, but it is what is happening on my machine (actually 2 machines). I am using Python 3.6 and sklearn (0.0).
The code:
from sklearn.metrics import mean_squared_error
predictions = [96271]
test = [35241]
mse = mean_squared_error(test, predictions)
print('MSE: %.3f' % mse)
Output: MSE: -570306396.000
The issue seems to just be an int overflow:
>>> from sklearn.metrics import mean_squared_error
>>> predictions = [96271]
>>> test = [35241]
>>> mean_squared_error(test, predictions)
-570306396.0
>>> np.float32(96271 - 35241)**2
3724660900
>>> np.int32(96271 - 35241)**2
-570306396
The natural question is where it breaks, since a built-in Python int would not overflow:
>>> (96271 - 35241)**2
3724660900
So the problem arises when scikit-learn wraps your data into a numpy array in
y_type, y_true, y_pred, multioutput = _check_reg_targets(
    y_true, y_pred, multioutput)
which identifies your data type as int32 and outputs np.array(..., dtype=np.int32), which then overflows.
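As a side note (my own assumption about the setup, not stated in the question): the default integer dtype numpy picks for Python ints is platform dependent, which is likely why this only shows up on some machines:
import numpy as np

a = np.array([96271 - 35241])
print(a.dtype)        # int32 on Windows, int64 on most Linux/macOS builds
print(np.mean(a**2))  # negative on int32 platforms (overflow), 3724660900.0 otherwise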
Note that simply making sure things look like floats will work too
>>> from sklearn.metrics import mean_squared_error
>>> predictions = [96271.] # Note the dot!
>>> test = [35241.]
>>> mean_squared_error(test, predictions)
3724660900
Apart from the overflow, the only way MSE can be negative is if you provide negative sample_weight (or multioutput) values, e.g.
mean_squared_error([0, 0], [1, 0], sample_weight=[-1, 1.2])
-5.000000000000001
since what sklearn does is first take the square of the differences, and then take a weighted average using
avg = sum(a * weights) / sum(weights)
which can be negative if some weight is negative but the sum of weights is positive.
From the source:
y_type, y_true, y_pred, multioutput = _check_reg_targets(
    y_true, y_pred, multioutput)
check_consistent_length(y_true, y_pred, sample_weight)
output_errors = np.average((y_true - y_pred) ** 2, axis=0,
                           weights=sample_weight)
if not squared:
    output_errors = np.sqrt(output_errors)
if isinstance(multioutput, str):
    if multioutput == 'raw_values':
        return output_errors
    elif multioutput == 'uniform_average':
        # pass None as weights to np.average: uniform mean
        multioutput = None
return np.average(output_errors, weights=multioutput)
Specifically note the
output_errors = np.average((y_true - y_pred) ** 2, axis=0,
                           weights=sample_weight)
line, which shows where the negative output can come from.
There is a thread discussing the questionable choice of the numpy authors to accept negative weights in averaging (https://github.com/numpy/numpy/issues/9825), but as it stands now in 2021, np.average still accepts these weights and acts in a way that might surprise people.
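For completeness, the surprising behaviour is already visible in plain numpy, using the same weighted-average formula (a minimal sketch of my own, mirroring the sklearn example above):
import numpy as np

# squared errors from the example above are [1, 0], with one negative weight
print(np.average([1.0, 0.0], weights=[-1, 1.2]))
# approximately -5.0, since sum(a * w) / sum(w) = -1 / 0.2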
I cannot reproduce your error. On Python 3.8.5, sklearn 0.24.1, numpy 1.20.1, I get:
mse = mean_squared_error(test, predictions)
print('MSE: %.3f' % mse)
MSE: 3724660900.000
Looking at the numbers, my guess is that they default to np.int32 in the calculation, so the square of your values exceeds 2,147,483,647. You can try:
mean_squared_error(np.float64(test),np.float64(predictions))
It might also be good to check your versions of numpy / scikit-learn.

Numpy and tensorflow RNN shape representation mismatch

I'm building my first RNN in tensorflow. After understanding all the concepts regarding the 3D input shape, I came across this issue.
In my numpy version (1.15.4), the shape representation of 3D arrays is the following: (panel, row, column). I will make each dimension different so that it is clearer:
In [1]: import numpy as np
In [2]: arr = np.arange(30).reshape((2,3,5))
In [3]: arr
Out[3]:
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
In [4]: arr.shape
Out[4]: (2, 3, 5)
In [5]: np.__version__
Out[5]: '1.15.4'
My understanding here is: I have two timesteps, and each timestep has 3 observations with 5 features per observation.
However, in tensorflow "theory" (which I believe is strongly based on numpy), RNN cells expect tensors (i.e. just n-dimensional matrices) of shape [batch_size, timesteps, features], which could be translated to (row, panel, column) in the numpy "jargon".
As can be seen, the representation doesn't match, leading to errors when feeding numpy data into a placeholder, which in most of the examples and theory is defined like:
x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
np.reshape() doesn't solve the issue because it just rearranges the dimensions but messes up the data.
I'm using for the first time the Dataset API, but I encounter the problems once into the session, not in the Dataset API ops.
I'm using the static_rnn method, and everything works well until I have to feed the data into the placeholder, which obviously results in a shape error.
I have tried to change the placeholder shape to shape=[N_TIMESTEPS_X, None, N_FEATURES]. However, I'm using the Dataset API, and I get errors when making the initializer if I change the X placeholder to that shape.
So, to summarize:
First problem: Shape errors with different shape representations.
Second problem: Dataset error when equating the shape representations (I think that either static_rnn or dynamic_rnn would function if this is resolved).
My question is:
Is there anything I'm missing in regard to this different representation logic which makes the practice confusing?
Could the solution be to switch to dynamic_rnn? (Although the shape problems I encounter are related to the Dataset API initializer being fed with shape [N_TIMESTEPS_X, None, N_FEATURES], not to the RNN cell itself.)
Thank you very much for your time.
Full code:
'''The idea is to create xt, yt, xval and yval. My numpy arrays to
be fed are of the following shapes:
The 3D xt array has a shape of: (11, 69579, 74)
The 3D xval array has a shape of: (11, 7732, 74)
The yt array has a shape of: (69579, 3)
The yval array has a shape of: (7732, 3)
'''
N_TIMESTEPS_X = xt.shape[0] ## The stack number
BATCH_SIZE = 256
#N_OBSERVATIONS = xt.shape[1]
N_FEATURES = xt.shape[2]
N_OUTPUTS = yt.shape[1]
N_NEURONS_LSTM = 128 ## Number of units in the LSTMCell
N_NEURONS_DENSE = 64 ## Number of units in the Dense layer
N_EPOCHS = 600
LEARNING_RATE = 0.1
### Define the placeholders and gather the data.
train_data = (xt, yt)
validation_data = (xval, yval)
## We define the placeholders as a trick so that we do not run into the memory problems associated with feeding the data directly.
'''As an alternative, you can define the Dataset in terms of tf.placeholder() tensors, and feed the NumPy arrays when you initialize an Iterator over the dataset.'''
batch_size = tf.placeholder(tf.int64)
x = tf.placeholder(tf.float32, shape=[None, N_TIMESTEPS_X, N_FEATURES], name='XPlaceholder')
y = tf.placeholder(tf.float32, shape=[None, N_OUTPUTS], name='YPlaceholder')
# Creating the two different dataset objects.
train_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((x,y)).batch(BATCH_SIZE)
# Creating the Iterator type that permits to switch between datasets.
itr = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = itr.make_initializer(train_dataset)
validation_init_op = itr.make_initializer(val_dataset)
next_features, next_labels = itr.get_next()
### Create the graph
cellType = tf.nn.rnn_cell.LSTMCell(num_units=N_NEURONS_LSTM, name='LSTMCell')
inputs = tf.unstack(next_features, N_TIMESTEPS_X, axis=0)
'''inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size]'''
RNNOutputs, _ = tf.nn.static_rnn(cell=cellType, inputs=inputs, dtype=tf.float32)
predictionsLayer = tf.layers.dense(inputs=tf.layers.batch_normalization(RNNOutputs[-1]), units=N_NEURONS_DENSE, activation=None, name='Dense_Layer')
### Define the cost function, that will be optimized by the optimizer.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=predictionsLayer, labels=next_labels, name='Softmax_plus_Cross_Entropy'))
optimizer_type = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE, name='AdamOptimizer')
optimizer = optimizer_type.minimize(cost)
### Model evaluation
correctPrediction = tf.equal(tf.argmax(predictionsLayer,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correctPrediction,tf.float32))
#confusionMatrix = tf.confusion_matrix(next_labels, predictionsLayer, num_classes=3, name='ConfMatrix')
N_BATCHES = train_data[0].shape[0] // BATCH_SIZE
## Saving variables so that we can restore them afterwards.
saver = tf.train.Saver()
save_dir = '/home/zmlaptop/Desktop/tfModels/{}_{}'.format(cellType.__class__.__name__, datetime.now().strftime("%Y%m%d%H%M%S"))
os.mkdir(save_dir)
varDict = {'nTimeSteps':N_TIMESTEPS_X, 'BatchSize': BATCH_SIZE, 'nFeatures':N_FEATURES,
           'nNeuronsLSTM':N_NEURONS_LSTM, 'nNeuronsDense':N_NEURONS_DENSE, 'nEpochs':N_EPOCHS,
           'learningRate':LEARNING_RATE, 'optimizerType': optimizer_type.__class__.__name__}
varDicSavingTxt = save_dir + '/varDict.txt'
modelFilesDir = save_dir + '/modelFiles'
os.mkdir(modelFilesDir)
logDir = save_dir + '/TBoardLogs'
os.mkdir(logDir)
acc_summary = tf.summary.scalar('Accuracy', accuracy)
loss_summary = tf.summary.scalar('Cost_CrossEntropy', cost)
summary_merged = tf.summary.merge_all()
with open(varDicSavingTxt, 'w') as outfile:
    outfile.write(repr(varDict))
with tf.Session() as sess:
    tf.set_random_seed(2)
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter(logDir + '/train', sess.graph)
    validation_writer = tf.summary.FileWriter(logDir + '/validation')
    # initialise iterator with train data
    sess.run(train_init_op, feed_dict = {x : train_data[0], y: train_data[1], batch_size: BATCH_SIZE})
    print('Training starts!')
    for epoch in range(N_EPOCHS):
        batchAccList = []
        tot_loss = 0
        for batch in range(N_BATCHES):
            optimizer_output, loss_value, summary = sess.run([optimizer, cost, summary_merged])
            accBatch = sess.run(accuracy)
            tot_loss += loss_value
            batchAccList.append(accBatch)
            if batch % 10 == 0:
                train_writer.add_summary(summary, batch)
        epochAcc = tf.reduce_mean(batchAccList)
        if epoch % 10 == 0:
            print("Epoch: {}, Loss: {:.4f}, Accuracy: {}".format(epoch, tot_loss / N_BATCHES, epochAcc))
    #confM = sess.run(confusionMatrix)
    #confDic = {'confMatrix': confM}
    #confTxt = save_dir + '/confMDict.txt'
    #with open(confTxt, 'w') as outfile:
    #    outfile.write(repr(confDic))
    #print(confM)
    # initialise iterator with validation data
    sess.run(validation_init_op, feed_dict = {x : validation_data[0], y: validation_data[1], batch_size: len(validation_data[0])})
    print('Validation Loss: {:4f}, Validation Accuracy: {}'.format(sess.run(cost), sess.run(accuracy)))
    summary_val = sess.run(summary_merged)
    validation_writer.add_summary(summary_val)
    saver.save(sess, modelFilesDir)
Is there anything I'm missing in regard to this different
representation logic which makes the practice confusing?
In fact, you made a mistake about the input shapes of static_rnn and dynamic_rnn. The input shape of static_rnn is [timesteps, batch_size, features] (link), i.e. a length-timesteps list of 2D tensors of shape [batch_size, features]. But the input shape of dynamic_rnn is either [timesteps, batch_size, features] or [batch_size, timesteps, features], depending on whether time_major is True or False (link).
Could the solution be attained to switching to dynamic_rnn?
The key is not whether you use static_rnn or dynamic_rnn, but that your data shape matches the required shape. The general placeholder format is, as in your code, [None, N_TIMESTEPS_X, N_FEATURES]. It's also convenient for you to use the Dataset API.
You can use transpose() (link) instead of reshape(). transpose() will permute the dimensions of an array and won't mess up the data.
So your code needs to be modified.
# permute the dimensions
xt = xt.transpose([1,0,2])
xval = xval.transpose([1,0,2])
# adjust the unstack axis; axis=1 represents timesteps
inputs = tf.unstack(next_features, axis=1)
Other errors should have nothing to do with the RNN shape.
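To illustrate the difference between transpose() and reshape() mentioned above, here is a small numpy sketch of my own using the (2, 3, 5) array from the question:
import numpy as np

arr = np.arange(30).reshape((2, 3, 5))   # interpreted as (timesteps, batch, features)

t = arr.transpose([1, 0, 2])             # -> (batch, timesteps, features)
print(t.shape)                           # (3, 2, 5)
print(t[0])                              # rows [0..4] and [15..19]: same observations, axes reordered

r = arr.reshape((3, 2, 5))
print(r[0])                              # rows [0..4] and [5..9]: the pairing of observations has changed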

How To Calculate F1-Score For Multilabel Classification?

I am trying to calculate the f1_score, but I get warnings in some cases when I use the sklearn f1_score method.
I have a multilabel prediction problem with 5 classes.
import numpy as np
from sklearn.metrics import f1_score, precision_recall_fscore_support
y_true = np.zeros((1,5))
y_true[0,0] = 1 # => label = [[1, 0, 0, 0, 0]]
y_pred = np.zeros((1,5))
y_pred[:] = 1 # => prediction = [[1, 1, 1, 1, 1]]
result_1 = f1_score(y_true=y_true, y_pred=y_pred, labels=None, average="weighted")
print(result_1) # prints 1.0
result_2 = precision_recall_fscore_support(y_true=y_true, y_pred=y_pred, labels=None, average="weighted")
print(result_2) # prints: (1.0, 1.0, 1.0, None) for precision/recall/fbeta_score/support
When I use average="samples" instead of "weighted" I get (0.1, 1.0, 0.1818..., None). Is the "weighted" option not useful for a multilabel problem or how do I use the f1_score method correctly?
I also get a warning when using average="weighted":
"UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples."
It works if you add slightly more data:
import numpy as np
from sklearn.metrics import recall_score, precision_score, f1_score

y_true = np.array([[1,0,0,0], [1,1,0,0], [1,1,1,1]])
y_pred = np.array([[1,0,0,0], [1,1,1,0], [1,1,1,1]])

recall_score(y_true=y_true, y_pred=y_pred, average='weighted')
# 1.0
precision_score(y_true=y_true, y_pred=y_pred, average='weighted')
# 0.9285714285714286
f1_score(y_true=y_true, y_pred=y_pred, average='weighted')
# 0.95238095238095244
The data suggests we have not missed any true positives, i.e. there are no false negatives (recall_score equals 1). However, we have predicted one false positive in the second observation, which leads to a precision_score of ~0.93.
As both precision_score and recall_score are non-zero with the weighted parameter, the f1_score therefore exists. I believe your case is problematic due to the lack of information in the example (a single sample where most classes have no true instances).
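To see where the warning in the original example comes from, a per-class breakdown of the single-sample case can help (a small sketch of my own, not part of the original answer):
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[1, 0, 0, 0, 0]])
y_pred = np.array([[1, 1, 1, 1, 1]])

p, r, f, support = precision_recall_fscore_support(y_true, y_pred, average=None)
print(support)  # [1 0 0 0 0]: classes 1-4 have no true samples
print(r)        # recall for classes 1-4 is 0/0, ill-defined, set to 0.0, hence the warning
# with average="weighted", each class is weighted by its support, so only class 0 counts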

Scikit-Learn GridSearch custom scoring function

I need to perform kernel PCA on a dataset of dimension (5000, 26421) to get a lower-dimensional representation. To choose the number of components (say k), I am reducing the data, reconstructing it back to the original space, and computing the mean squared error between the reconstructed and original data for different values of k.
I came across sklearn's grid-search functionality and want to use it for the above parameter estimation. Since there is no score function for kernel PCA, I have implemented a custom scoring function and am passing it to GridSearchCV.
from sklearn.decomposition.kernel_pca import KernelPCA
from sklearn.model_selection import GridSearchCV
import numpy as np
import math
def scorer(clf, X):
    Y1 = clf.inverse_transform(X)
    error = math.sqrt(np.mean((X - Y1)**2))
    return error

param_grid = [
    {'degree': [1, 10], 'kernel': ['poly'], 'n_components': [100, 400, 100]},
    {'gamma': [0.001, 0.0001], 'kernel': ['rbf'], 'n_components': [100, 400, 100]},
]
kpca = KernelPCA(fit_inverse_transform=True, n_jobs=30)
clf = GridSearchCV(estimator=kpca, param_grid=param_grid, scoring=scorer)
clf.fit(X)
However, it results in the below error:
/usr/lib64/python2.7/site-packages/sklearn/metrics/pairwise.py in check_pairwise_arrays(X, Y, precomputed=False, dtype=<type 'numpy.float32'>)
    119     elif X.shape[1] != Y.shape[1]:
    120         raise ValueError("Incompatible dimension for X and Y matrices: "
    121                          "X.shape[1] == %d while Y.shape[1] == %d" % (
--> 122                          X.shape[1], Y.shape[1]))
        X.shape = (1667, 26421)
        Y.shape = (112, 100)

ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 26421 while Y.shape[1] == 100
Can someone point out what exactly am I doing wrong?
The signature of the scoring function is incorrect. You only need to pass the truth and predicted values. So this is how you declare your custom scoring function:
def my_scorer(y_true, y_predicted):
    error = math.sqrt(np.mean((y_true - y_predicted)**2))
    return error
Then you can use the make_scorer function in sklearn to pass it to GridSearchCV. Be sure to set the greater_is_better parameter accordingly:
Whether score_func is a score function (default), meaning high is good, or a loss function, meaning low is good. In the latter case, the scorer object will sign-flip the outcome of the score_func.
I am assuming you are calculating an error, so this attribute should be set to False, since the lower the error, the better:
from sklearn.metrics import make_scorer
my_func = make_scorer(my_scorer, greater_is_better=False)
Then you pass it to GridSearchCV:
GridSearchCV(estimator=my_clf, param_grid=param_grid, scoring=my_func)
Where my_clf is your classifier.
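To make the pieces above concrete, here is a minimal, self-contained sketch of the pattern, using Ridge as a stand-in estimator and a toy dataset (both are my own assumptions for illustration, not part of the original question):
import math
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def my_scorer(y_true, y_predicted):
    # root mean squared error between truth and prediction
    return math.sqrt(np.mean((y_true - y_predicted) ** 2))

# sign-flip the result because this is a loss (lower is better)
my_func = make_scorer(my_scorer, greater_is_better=False)

X = np.random.rand(200, 5)
y = X.sum(axis=1)

grid = GridSearchCV(estimator=Ridge(), param_grid={'alpha': [0.1, 1.0, 10.0]},
                    scoring=my_func, cv=3)
grid.fit(X, y)
print(grid.best_params_)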
One more thing: I don't think GridSearchCV is exactly what you are looking for. It basically evaluates the estimator on train and test splits. But here you only want to transform your input data. You need to use a Pipeline in sklearn. Look at the example mentioned here of combining PCA and GridSearchCV.
